PostgreSQL in-place database upgrade

I'm trying to find the fastest way to move a PostgreSQL 9.1.20
database from one server to another server
running PostgreSQL 9.3.10.
The scenario is as follows:
Production server: Ubuntu 12.04,
PostgreSQL 9.1.20, database size approx. 250 GB.
Target server we are relocating to:
Ubuntu 14.04, PostgreSQL 9.3.10.
The first approach we tried was to dump the database (pg_dump)
on the old server and restore it on the new server (pg_restore).
It worked just fine, but the whole relocation takes about 4 hours:
pg_dump takes 3 hours and pg_restore takes 1 hour
(1 Gbit network link, SSD disks on both servers).
A total downtime of 4 hours is not acceptable.
The next attempt was to use pg_basebackup instead of
pg_dump. This reduced the backup time to about 40 minutes
instead of 3 hours, which is acceptable.
However, we cannot restore the backup produced by pg_basebackup
on the new server because of the version incompatibility
(pg_basebackup makes a physical copy of the data directory,
which is tied to the major version).
I have read many articles on how to do an in-place database
upgrade, but they all seem to refer to upgrading on the
SAME server.
So my question: how can I upgrade the data directory produced
by pg_basebackup on the new server without having the previous
PostgreSQL server binaries installed?
Thanks.

You can perform the upgrade using repmgr and pg_upgrade with minimal downtime (several minutes).
On the master (PostgreSQL 9.1) enable streaming replication. A DB restart is required:
hot_standby = on
wal_keep_segments = 4000    # must be high enough for the standby to catch up
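Streaming replication on 9.1 also needs a couple of settings not shown above; a sketch of what is typically required on the master (the network range in pg_hba.conf is a placeholder):
wal_level = hot_standby          # write enough WAL for a hot standby
max_wal_senders = 5              # allow replication connections (used by pg_basebackup too)
# and in pg_hba.conf on the master, allow the standby to connect for replication:
host    replication    repmgr    192.168.0.0/24    md5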
On the standby server install both PostgreSQL 9.1 and 9.3. On both servers install repmgr 2.0, because the repmgr 3.x versions only work with PostgreSQL 9.3 or higher.
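repmgr also expects a configuration file on each node; a minimal sketch for repmgr 2.0 (hostnames and credentials are placeholders, node values differ per server):
cluster=my_cluster
node=1
node_name=psql01a
conninfo='host=psql01a.example.net user=repmgr dbname=repmgr'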
Synchronize master and standby server:
repmgr -D /var/lib/postgresql/9.1/main -p 5432 -d repmgr -U repmgr --verbose standby clone -h psql01a.example.net --fast-checkpoint
Underneath, pg_basebackup is used, so this step is pretty much the same as the one you've described.
Once cloning is finished, start the standby and register it:
repmgr -f /etc/repmgr.conf standby register
service postgresql start
Streaming replication then lets the standby catch up on changes committed on the master while the clone was running.
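Before moving on, it is worth verifying that the standby has actually caught up; a sketch using the 9.1/9.3 function names:
-- on the master
SELECT pg_current_xlog_location();
-- on the standby (should be equal, or very close, to the master's value)
SELECT pg_last_xlog_replay_location();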
With both databases running (in sync), check whether the upgrade is possible:
/usr/lib/postgresql/9.3/bin/pg_upgrade --check \
--old-datadir=/var/lib/postgresql/9.1/main \
--new-datadir=/var/lib/postgresql/9.3/main \
--old-bindir=/usr/lib/postgresql/9.1/bin \
--new-bindir=/usr/lib/postgresql/9.3/bin -v
If the upgrade checks pass, you have to stop the master. This is where the downtime period starts. Promote the standby to master:
postgres#: repmgr standby promote
service postgresql start
# if everything looks ok, just stop the server
service postgresql stop
The fastest upgrade method is to use hard links:
/usr/lib/postgresql/9.3/bin/pg_upgrade --link \
--old-datadir=/var/lib/postgresql/9.1/main \
--new-datadir=/var/lib/postgresql/9.3/main \
--old-bindir=/usr/lib/postgresql/9.1/bin \
--new-bindir=/usr/lib/postgresql/9.3/bin
Even with ~200 GB of data the upgrade shouldn't take longer than a few minutes (typically less than one minute). If things go south, it is hard to revert the changes in link mode (but you would still have the old master server functional). Once the upgrade is finished, start the new server. Verify everything is OK, and then you can safely delete the old cluster:
./delete_old_cluster.sh
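One more step worth mentioning: pg_upgrade does not carry planner statistics over to the new cluster, so analyze all databases before putting it back under load (depending on the version, pg_upgrade also generates an analyze_new_cluster.sh script for this):
vacuumdb --all --analyze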

Related

postgresql restoring from dumpall file. Indexes and Constraints not being created

I'm upgrading from an old PostgreSQL version to a newer one (9.5 to 15, on Ubuntu 20.04), and when I try to restore from the dump file everything seems fine, except that the indexes are not being created.
I made the dump file by running
pg_dumpall > dbdump.dump
Then, after upgrading the cluster and everything, I restore the DB with
psql -f dbdump.dump postgres
as suggested by the PostgreSQL documentation.
When I check indexes with select * from pg_indexes where tablename not like 'pg%'; there's nothing.
There should be at least 10K+ indexes (yes, the DB has more than 10K tables, so creating the indexes by hand is not an option).
What could be wrong?
In this case you should check the PostgreSQL log file. If the restore fails partway through, there will be an error in the log file, or even in the console output. A few more details would help clarify the situation: for example, are you using a GUI tool for the restore, or are you running the command from the CLI?
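One way to surface such errors is to rerun the restore so that psql stops at the first failure and keeps everything printed on stderr in a log file; a sketch using the file names from the question:
psql --set ON_ERROR_STOP=on -f dbdump.dump postgres 2> restore_errors.log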

Kiwi-tcms v8.0 database migration problem with test runs

I just migrated Kiwi TCMS from v7.1 on server A to v8.0 on server B (I virtualized our application server).
I use a pgsql container for the db.
For the migration I used the backup-restore method.
The db volume on server B is brand new and I created the schema using /Kiwi/manage.py migrate
Then I restored the .json, but I ran into a schema problem because of the primary key changes in v8.0. I simply replaced the old names with the new ones in the JSON file and the restore went through.
The application runs fine, except that I now have a problem with test runs when I go to the "search for test runs" page:
DataTables warning: table id=resultsTable - Requested unknown parameter 'run_id' for row 0, column 0. For more information about this error, please see http://datatables.net/tn/4
I am not sure if I made any mistake during the migration or if there is actually a bug in the migration process.
For info: on a test container I tried migrating the database from v7.3 to v8.0. Everything went fine, but I had the same problem at the end.
Thank you in advance for your support!
EDIT 1
I solved my problem by following the advice of @Alexander Todorov and restarting the migration steps from scratch:
I uploaded the Kiwi 7.1 image to my Docker registry, so I first migrated to server B with Kiwi 7.1. Then I could focus on the upgrade.
I use a pgsql container, so I didn't need to update the mariadb container before migrating.
I upgraded to Kiwi 8.0 using the kiwitcms/kiwi:latest image.
I am not sure why the upgrade from 7.3 didn't work the first time, but from 7.1 to 8.0 everything went fine.
It is possible that I backed up from 7.1 and restored on a 7.3. Everything was fine on 7.3 at that moment, but I had trouble with the migration to 8.0...
Anyway, thanks for your support !
EDIT 2
I don't know how this is possible, but I can't reproduce the exact same migration on another instance. I get the issue every time I try.
What I have now:
A preprod instance of kiwi working fine in v8.0
What I want:
A production instance in v8.0
What I tried:
Create a new instance in v8.0 and back up the preprod DB to restore it on prod => fails with the error I had before...
DataTables warning: table id=resultsTable - Requested unknown parameter 'run_id' for row 0, column 0. For more information about this error, please see http://datatables.net/tn/4
Create a new instance in v7.1 and migrate in the exact same way as I did on my preprod => fails with the same error...
I am really clueless on this :/
I just migrated Kiwi tcms from v7.1 on a server A to v8.0 on a server B (I virtualized our application server).
That is what is causing your problems. You are trying to restore one version of the DB schema + data onto another version (on the second server). Between the two versions there are quite a lot of DB migrations, and what you are trying to do will always lead to failure.
You can use server A and upgrade it in place to the latest version, then dump the data, move it to server B (running the same version), restore the data into the clean DB, and decommission server A.
You may also set up server B with the older version of Kiwi TCMS, migrate the data there, and then upgrade server B to v8.0.
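For the dump/restore part itself, a minimal sketch assuming the JSON backup is produced with Django's dumpdata/loaddata and that the web container is named kiwi_web (adjust the container name to match your compose file):
# on the old instance: dump the application data to JSON
docker exec -it kiwi_web /Kiwi/manage.py dumpdata --indent 2 > database.json
# on the new instance (running the SAME Kiwi TCMS version): create the schema, then load the data
docker exec -it kiwi_web /Kiwi/manage.py migrate
docker cp database.json kiwi_web:/tmp/database.json
docker exec -it kiwi_web /Kiwi/manage.py loaddata /tmp/database.json
Only after the data is loaded should the container image be bumped to v8.0, so the built-in migrations can run against it.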
No idea why it worked, but I did the following and everything seems to be fixed. My symptoms were exactly like those described here (7.3 -> 8.0, MariaDB, pruned after 8.0, DataTables warning).
$ cd Kiwi
$ sudo docker exec -it kiwi_db bash
$ mysql -ukiwi -p -h 127.0.0.1    # password "kiwi" from the yml
Then, within MariaDB:
use kiwi;
show tables;
describe testruns_testrun;
select * from testruns_testrun;
It was at this point, while refreshing the page to pull out an ID value to check against, that my test runs suddenly loaded. I then did a
sudo docker-compose down && sudo docker-compose up -d
to verify the issue doesn't come back, and it didn't. I'm no DBA, just poking around. I did make a new run from a plan, which worked until I navigated away and tried searching for the test run; I did the above afterwards.

postgres major upgrade (9.5.x to 9.6.x) within same data space

I am trying to upgrade my Postgres installation from 9.5.7 to 9.6.5.
My production instance has several databases and currently uses ~700 GB of space.
pg_upgrade needs two different directories for the old and the new data directory:
pg_upgrade -b oldbindir -B newbindir -d olddatadir -D newdatadir
It needs a new directory to do the upgrade. I was able to run the above command on my local/stage database, since it is small compared to prod, and I observed the following locally:
sudo du -sh /var/lib/pgsql/data-9.5
64G /var/lib/pgsql/data-9.5
sudo du -sh /var/lib/pgsql/data-9.6
60G /var/lib/pgsql/data-9.6
I had sufficient free space for the interim pg_upgrade process on local/stage and completed it successfully there.
In production, however, I have only ~300 GB of free space.
(After a successful upgrade we will delete the /var/lib/pgsql/data-9.5 directory.)
Is there any way to do an in-place data upgrade so that it does not need the same amount of extra space for the interim pg_upgrade process?
Run pg_upgrade
/usr/lib/postgresql/9.6/bin/pg_upgrade \
  -b /usr/lib/postgresql/9.5/bin/ \
  -B /usr/lib/postgresql/9.6/bin/ \
  -d /var/lib/pgsql/data-9.5/ \
  -D /var/lib/pgsql/data/ \
  --link --check
Performing Consistency Checks
-----------------------------
Checking cluster versions ok
Checking database user is the install user ok
Checking database connection settings ok
Checking for prepared transactions ok
Checking for reg* system OID user data types ok
Checking for contrib/isn with bigint-passing mismatch ok
Checking for roles starting with 'pg_' ok
Checking for presence of required libraries ok
Checking database user is the install user ok
Checking for prepared transactions ok
Clusters are compatible
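If the checks pass, the actual upgrade is the same command without --check, run while both clusters are stopped (a sketch reusing the paths from above):
/usr/lib/postgresql/9.6/bin/pg_upgrade \
  -b /usr/lib/postgresql/9.5/bin/ \
  -B /usr/lib/postgresql/9.6/bin/ \
  -d /var/lib/pgsql/data-9.5/ \
  -D /var/lib/pgsql/data/ \
  --link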
Always run the pg_upgrade binary of the new server, not the old one. pg_upgrade requires the specification of the old and new cluster's data and executable (bin) directories. You can also specify user and port values, and whether you want the data linked instead of copied (the default).
If you use link mode, the upgrade will be much faster (no file copying) and use less disk space, but you will not be able to access your old cluster once you start the new cluster after the upgrade. Link mode also requires that the old and new cluster data directories be in the same file system. (Tablespaces and pg_xlog can be on different file systems.) See pg_upgrade --help for a full list of options.
Thanks to the comprehensive PostgreSQL community documentation, which helped me a lot in finding the solution.

Running different versions of postgresql side by side

I have postgresql 9.3 installed.
I would like to have also postgres 9.6.1 installed.
Each application is using a different DB. Most of the times I don't run both applications, so I don't need them to run concurrently.
I downloaded the installer recommended by postgres and installed 9.6.1, but now it seems that 9.3 is no longer able to start. I'm getting an error when trying to run sudo service postgres start:
Starting PostgreSQL 9.3 database server
The PostgreSQL server failed to start. Please check the log output.
The log file (/var/log/postgresql/postgresql-9.3-main.log) is empty - not sure that's even the right one to look at.
Any idea how to be able to run both instances?
You need to check the postgresql.conf config file.
If you want to run both instances at the same time, they will need to run on different ports, otherwise they will conflict. The default is 5432; change it for one of the instances.
Also make sure that the data directory and the log file are unique for each instance.
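A sketch of what that could look like (the 9.3 path assumes the Ubuntu/Debian package layout, the 9.6 path assumes the EDB installer default; adjust to wherever your postgresql.conf files actually live):
# /etc/postgresql/9.3/main/postgresql.conf - keep the packaged default
port = 5432
# /opt/PostgreSQL/9.6/data/postgresql.conf - give the 9.6 instance its own port
port = 5433
After restarting, you would connect to the 9.6 instance with psql -p 5433.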

Upload a Postgres DB to an Amazon VM [closed]

I've been given a database which I can't handle on my PC because of limited storage and memory.
The person who gave me this DB provided the following details:
The compressed file is about 15GB, and uncompressed it's around
85-90GB. It'll take a similar amount of space once restored, so make
sure the machine that you restore it on has at least 220GB free to be
safe. Ideally, use a machine with at least 8GB RAM - although even our
modest 16GB RAM server can struggle with large queries on the tweet
table.
You'll need PostgreSQL 8.4 or later, and you'll need to create a
database to restore into with UTF8 encoding (use -E UTF8 when creating
it from the command-line). If this is a fresh PostgreSQL install, I
highly recommend you tweak the default postgresql.conf settings - use
the pgtune utility (search GitHub) to get some sane defaults for your
hardware. The defaults are extremely conservative, and you'll see
terrible query performance if you don't change them.
When I told him that my PC sort of sucks, he suggested I use an Amazon EC2 instance.
My two issues are:
How do I upload the db to an Amazon VM?
How do I use it after that?
I'm completely ignorant regarding cloud services and databases as you can see. Any relevant tutorial will be highly appreciated.
If you're new to cloud hosting, rather than using EC2 directly, consider EnterpriseDB's cloud options.
If you want to use EC2 directly, sign up and create an instance.
Choose your preferred Linux distro image. I'm assuming you'll use Linux on EC2; if you want to use Windows, that's presumably because you already know your way around it. Let the new VM provision and boot up, then SSH into it as per the documentation available on Amazon for EC2 and for that particular VM image. Perform any recommended setup for that VM image as per its documentation.
Once you've done the recommended setup for that instance, you can install PostgreSQL:
For Ubuntu, apt-get install postgresql
For Fedora, yum install postgresql
For CentOS, use the PGDG yum repository rather than the outdated version of PostgreSQL provided by the distribution.
You can now connect to Pg as the default postgres superuser:
sudo -u postgres psql
and can generally use PostgreSQL much the same way as on any other computer. You'll probably want to make yourself a user ID and a new database to restore into:
echo "CREATE USER $USER;" | sudo -u postgres psql
echo "CREATE DATABASE thedatabase WITH OWNER $USER" | sudo -u postgres psql
Change "thedatabase" to whatever you want to call your db, of course.
The exact procedure for restoring the dump to your new DB depends on the dump format.
For pg_dump -Fc or PgAdmin-III custom-format dumps:
sudo -u postgres pg_restore --dbname thedatabase thebackupfile
See "man pg_restore" and the online documentation for details on pg_restore.
For plain SQL format dumps you will want to stream the dump through a decompression program and then into psql. Since you haven't said anything about the dump file's name or format, it's hard to know exactly what to do. I'll assume it's gzipped (".gz" file extension), in which case you'd do something like:
gzip -dc thedumpfile.gz | sudo -u postgres psql thedatabase
If its file extension is ".bz2", change gzip to bzip2 (keeping the -dc flags). If it's a .zip, you'll want to unzip it first and then run psql on it using sudo -u postgres psql -f thedumpfilename.
Once restored you can connect to the db with psql thedatabase.
