I'm new to PostgreSQL and I'm looking to back up the database. I understand that there are three methods: pg_dump, filesystem snapshot/copy, and WAL archiving. Which one do you suggest for a full backup of the database? If possible, provide code snippets.
It depends a lot more on your operational requirements than anything else.
All three will require shelling out to an external program. libpq doesn't provide those facilities directly; you'll need to invoke pg_basebackup or pg_dump via execv or similar.
All three have different advantages.
Atomic snapshot based backups are useful if the filesystem supports them, but become useless if you're using tablespaces since you then need a multivolume atomic snapshot - something most systems don't support. They can also be a pain to set up.
pg_dump is simple and produces compact backups, but requires more server resources to run and doesn't support any kind of point-in-time recovery or incremental backup.
pg_basebackup + WAL archiving and PITR is very useful, and has a fairly low resource cost on the server, but is more complex to set up and manage. Proper backup testing is imperative.
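For illustration, a minimal sketch of that approach (the hostname, user name, and paths are placeholders, and the exact settings vary by PostgreSQL version):
# in postgresql.conf (setting names assume a reasonably recent server):
#   wal_level = replica
#   archive_mode = on
#   archive_command = 'test ! -f /backups/wal/%f && cp %p /backups/wal/%f'
# then take a base backup of the whole cluster, streaming the WAL needed to make it consistent:
pg_basebackup -h dbhost -U backup_user -D /backups/base -Fp -X stream -P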
I would strongly recommend allowing the user to control the backup method(s) used. Start with pg_dump since you can just invoke it as a simple command line and manage a single file. Use the -Fc mode and pg_restore to restore it where needed. Then explore things like configuring the server for WAL archiving and PITR once you've got the basics going.
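For example, a hedged sketch of that route (the database names and file names are placeholders):
# full dump in the compressed custom format
pg_dump -Fc -f mydb.backup mydb
# restore it into a freshly created database
createdb mydb_restored
pg_restore -d mydb_restored mydb.backup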
I am just wondering if this is the safest way, in terms of the database, to copy my production setup to a development environment?
ssh user@app.com pg_dump app-production | psql app-development
I just want to make sure that this command doesn't or can't have any unintended side effects on the database being dumped.
It will impose a considerable load on the production server to read all of the data from disk and send it over the network. It will also lock each object, sometimes in ways that can potentially interfere with the operation of the production system.
I think the least-impact method is to hook into whatever backup system you already have in place for the production system. If you use pg_dump for your backups, restore from the most recent one of those without touching production at all. If you use WAL archiving for your backups, "restore" from that to get your clone, again without touching production at all.
It won't make any changes to the production database; however, it might have a noticeable effect on production database performance.
It will increase the general load, as it's obviously going to read all the tables and the large objects.
However, the thing I'd be more concerned about is the way you're using the network. By piping directly over the connection, you're relying on the network staying up for the entire pg_dump, and you're also keeping the connection open until the load into app-development completes.
If the network drops at any point, you have to start over from the beginning.
I'd recommend you dump to a file if you can. Something like
pg_dump -Fc --file=app-production.backup app-production
And then transfer app-production.backup with sftp to your dev box.
That way you can use the custom format (-Fc), which compresses the data, so the transfer over ssh is smaller. Also, once you've sftp'd the file to your local dev box, you can load it, reload it, and reload it again as often as you want without ever going back to your production database.
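A sketch of the full round trip (the hostname, user name, and database names are placeholders):
# dump on the production host to a file, using the compressed custom format
ssh user@app.com pg_dump -Fc --file=app-production.backup app-production
# pull the dump down to the dev box
sftp user@app.com:app-production.backup .
# load it locally as often as you like (create the target database first if it doesn't exist)
pg_restore -d app-development app-production.backup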
PG Dump documentation
I inherited a legacy project that uses a PostgreSQL database with 19 stored procedures (functions) and some 70 views.
Recently we did an update on the live database, and because the changed functions forced us (due to a Postgres limitation) to drop and recreate all dependent functions and views, we spent quite some time on it.
Is there an automated way of changing functions and views in Postgres that takes care of dependencies and applies the changes in the proper order?
We have basic views that upper-level views are built on top of... it's a bit of a complex database, at least for me :)
Thanks
I think the easiest way to do this is to back up the database to a text file:
pg_dump database_name > database_name.pg_dump
The objects will be in proper dependency order, as otherwise restoring a database from backup would be hard. You can edit the function and view definitions in the backup file and restore it into a new database.
If the database backup file is too big to edit in your editor, then starting with Postgres 9.2 you can split it into 3 sections:
pg_dump --section=pre-data database_name > database_name.1.pg_dump
pg_dump --section=data database_name > database_name.2.pg_dump
pg_dump --section=post-data database_name > database_name.3.pg_dump
You'll edit only the first section, which will be small. In older versions you could use, for example, the split utility.
If you cannot afford the downtime required for a backup and restore, it gets trickier, but I'd still recommend working with the backup file. Remember that Postgres supports DDL in transactions: if you import functions and views in a transaction and there's an error, you can simply roll back all changes, make corrections, and try again.
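For example, a minimal sketch (the file and database names follow the split example above):
# apply the edited definitions in a single transaction; any error rolls the whole thing back
psql --single-transaction --set ON_ERROR_STOP=on -f database_name.1.pg_dump database_name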
There is no "easy" way. The best approach IMO is to be prepared first, and then set up a way to do this using SQL scripts and version control.
What we do in LedgerSMB is keep the function definitions in a series of .sql files, which are tracked in Subversion. We then have a script that reloads them. This will take some work to set up if you haven't done it before. The easiest way to get started is:
pg_dump -s > ddl_statements_for_mydb.sql
Then you can copy/paste the function definitions (change CREATE to CREATE OR REPLACE, or add a DROP ... IF EXISTS where appropriate). Then you will want to modularize them into usable chunks and have a script that reloads all chunks in the right order into your db. The time and effort that goes into setting everything up now will save many times that in the future, because you can apply changes in a predictable way to testing, staging, and production accounts with no appreciable downtime (perhaps even no downtime at all, depending on how you structure it).
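A minimal sketch of such a reload script, assuming one .sql file per chunk and a hypothetical ordering and database name:
#!/bin/sh
# reload the DDL chunks in dependency order; stop at the first error
set -e
DB=mydb
for f in 01_types.sql 02_functions.sql 03_base_views.sql 04_upper_views.sql; do
    psql --set ON_ERROR_STOP=on -f "$f" "$DB"
done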
I've outgrown the SQL Server custom actions available in WiX, so I'm taking the bold step of creating my own using Deployment Tools Foundation. I want to be a good citizen and make sure that mine support rollback. But what's the best way of doing it?
I need to support SQL Server 2005 and later, all editions.
The problem, as I see it, is that Windows Installer works in two phases: it does the work, storing undo information as it goes. Then, when all the pieces are in place it either commits (deleting the undo information) or does a rollback.
This means that standard transactions won't do the job. They would have to be completed inside my Execute custom action, and I wouldn't get a chance to roll them back later.
I've considered taking a copy-only backup of the database that I can restore in the rollback action if necessary, but I think this approach, whilst simple, has shortcomings. I don't know how big our databases will get, for example, so I can't guarantee that there will be space available to hold the backup on the target machine. Also, backup and restore can take a while to complete, and I don't want typical installs (where rollback doesn't happen) to be unnecessarily slow.
So that brings me to my current favoured idea: make sure the Distributed Transaction Coordinator is started up, then initialise a Distributed Transaction before making changes, then either committing it or rolling it back in the appropriate custom actions.
It seems I can use the members of the TransactionInterop class to export a cookie that will enable me to share the transaction between my different custom actions.
Can anyone with experience of this kind of thing say if it is likely to work?
Some database/instance operations cannot be done inside a transaction (e.g. CREATE/ALTER/DROP ENDPOINT), and other operations cannot be done inside a distributed transaction (e.g. SAVE TRANSACTION). So you won't be able to do them at all in your proposed plan. Also, your DB upgrade scripts will all have to work correctly when run inside an uncommitted transaction.
I would say there are fewer risks in going down the backup/restore path (or, alternatively, creating a database snapshot and reverting to the snapshot on rollback, with the drawback of requiring Enterprise Edition).
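For reference, a hedged sketch of the snapshot approach, written here as sqlcmd calls (the database name, logical data file name, and paths are placeholders; in practice you'd run the same T-SQL from your custom actions):
# before making changes: create a snapshot of the target database
sqlcmd -Q "CREATE DATABASE MyDb_InstallSnap ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_InstallSnap.ss') AS SNAPSHOT OF MyDb"
# in the rollback action: revert to the snapshot, then drop it
sqlcmd -Q "RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = 'MyDb_InstallSnap'"
sqlcmd -Q "DROP DATABASE MyDb_InstallSnap"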
Another option is to have an undo script for every "do" script run during the upgrade, and have the undo scripts run during rollback to remove the effects of the installation. I understand that this is a hard problem; it probably doubles the number of scripts that have to be developed (and tested...), and it requires some serious developer discipline.
I've done quite a few installers with SQL scripts over the years, and I've come to the opinion that they're only suited to simple databases: "here's my VB app with a local MSDE/MySQL database", or "here's my local store for code-table lookups and temporary commits while we wait to sync them somewhere else".
Once you get into industrial-strength, heavy-lifting enterprise-app situations, I like to get my DB configuration out of the installer and into the application as a first-run story. You can do a lot heavier lifting with C# there and not be constrained by MSI.
I need to make some structural changes to a database (alter tables, add new columns, change some rows, etc.), but I need to make sure that if something goes wrong I can roll back to the initial state:
All needed changes are inside a SQL script file.
I don't have administrative access to database.
I really need to ensure the backup is done on the server side, since the DB has more than 30 GB of data.
I need to use sqlplus (under a ssh dedicated session over a vpn)
It's not possible to use "flashback database"! It's off, and I can't stop the database.
Am I in really deep $#$%?
Any ideas how to back up the database using sqlplus and leave the backup on the DB server?
Better than exp/imp, you should use RMAN. It's built specifically for this purpose; it can do hot backup/restore, and if you completely screw up, you're still OK.
One 'gotcha' is that you have to back up the $ORACLE_HOME directory too (in my experience), because you need that locally stored information to recover the control files.
A search for RMAN on Google gives some VERY good information on the first page.
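A minimal sketch of a full hot backup, assuming the database runs in ARCHIVELOG mode and you can log in with OS authentication on the server:
rman target / <<'EOF'
# keep a control file and spfile copy with every backup
CONFIGURE CONTROLFILE AUTOBACKUP ON;
# full backup of the database plus the archived redo logs
BACKUP DATABASE PLUS ARCHIVELOG;
EOF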
An alternate approach might be to create a new schema that contains your modified structures and data and actually test with that. That presumes you have enough space on your DB server to hold all the test data. You really should have a pretty good idea your changes are going to work before dumping them on a production environment.
I wouldn't use sqlplus to do this. Take a look at export/import. The export utility will grab the definitions and data for your database (and it can be run in read-consistent mode). The import utility will read this file and create the database structures from it. However, access to these utilities does require permissions to be granted, particularly if you need to back up the whole database, not just a schema.
That said, it's somewhat troubling that you're expected to perform the tasks of a DBA (alter tables, backup database, etc) without administrative rights. I think I would be at least asking for the help of a DBA to oversee your approach before you start, if not insisting that the DBA (or someone with appropriate privileges and knowledge) actually perform the modifications to the database and help recover if necessary.
Trying to back up 30 GB of data through sqlplus is insane. It will take several hours, require 3x to 5x as much disk space, and may not be restorable without more testing.
You need to use exp and imp. These are command-line tools designed to back up and restore the database; if you have access to sqlplus via your ssh session, you have access to exp/imp. You don't need administrator access to use them. They will dump the database (with all tables, triggers, views, and procedures) for the user(s) you have access to.
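For example, a hedged sketch (the username, password, and paths are placeholders; run it on the DB server so the dump file stays there):
# export everything owned by the schema you can log in as, using a read-consistent snapshot
exp myuser/mypassword owner=myuser consistent=y file=/u01/backups/myuser.dmp log=/u01/backups/exp.log
# if the changes go wrong, restore from the dump
imp myuser/mypassword file=/u01/backups/myuser.dmp full=y log=/u01/backups/imp.log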
The backup and restore process for a large database or collection of databases on SQL Server is very important for disaster-recovery purposes. However, I have not found a robust solution that guarantees the whole process is as efficient as possible, 100% reliable, and easily maintainable and configurable across multiple servers.
Microsoft's Maintenance Plans don't seem to be sufficient. The best solution I have used is one that I created manually, using many jobs with many steps per database running on the source server (backup) and destination server (restore). The jobs use stored procedures to do the backup, copying and restoring. This runs once a day (full backup/restore) and intraday every 5 minutes (transaction log shipping).
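For concreteness, those stored procedures boil down to commands along these lines (the server names, database names, and paths are placeholders), sketched here as sqlcmd calls:
# daily: full backup on the source server, restore on the destination left ready for log restores
sqlcmd -S SourceServer -Q "BACKUP DATABASE [MyDb] TO DISK = N'D:\Backups\MyDb_full.bak' WITH INIT"
sqlcmd -S DestServer -Q "RESTORE DATABASE [MyDb] FROM DISK = N'D:\Backups\MyDb_full.bak' WITH REPLACE, NORECOVERY"
# every 5 minutes: ship and apply the transaction log
sqlcmd -S SourceServer -Q "BACKUP LOG [MyDb] TO DISK = N'D:\Backups\MyDb_log.trn' WITH INIT"
sqlcmd -S DestServer -Q "RESTORE LOG [MyDb] FROM DISK = N'D:\Backups\MyDb_log.trn' WITH NORECOVERY"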
Although my current process works and reports any job failures via email, I know the whole process isn't very reliable and cannot be easily maintained/configured on all our servers by a non-DBA without having in-depth knowledge of the process.
I would like to know if others have this same backup/restore process and how others overcome this issue.
I've used a similar setup to keep dev/test/QA databases 'zero-stepped' on a nightly basis for developers and QA folks to use.
Documentation is the key - if you want to remove what Scott Hanselman calls 'bus factor' (i.e. the danger that the creator of the system will get hit by a bus and everything starts to suck).
That said, for normal database backups and disaster recovery plans, I've found that SQL Server Maintenance Plans work out pretty well. As long as you include:
1) Decent documentation
2) Routine testing.
I've outlined some of the ways to go about doing that (for anyone drawn to this question looking for an example of how to go about creating a disaster recovery plan):
SQL Server Backup Best Practices (Free Tutorial/Video)
The key part of your question is the ability for the backup solution to be managed by a non-DBA. Any native SQL Server answer like backup scripts isn't going to meet that need, because backup scripts require T-SQL knowledge.
Because of that, you want to look toward third-party solutions like the ones Mitch Wheat mentioned. I work for Quest (the makers of LiteSpeed) so of course I'm partial to that one - it's easy to show to non-DBAs. Before I left my last company, I had a ten minute session to show the sysadmins and developers how the LiteSpeed console worked, and that was that. They haven't called since.
Another approach is using the same backup software that the rest of your shop uses. TSM, Veritas, Backup Exec and Microsoft DPM all have SQL Server agents that let your Windows admins manage the backup process with varying degrees of ease-of-use. If you really want a non-DBA to manage it, this is probably the most dead-easy way to do it, although you sacrifice a lot of performance that the SQL-specific backup tools give you.
I am doing precisely the same thing and have various issues semi-regularly, even with this process.
How do you handle the gap between copying the file from Server A to Server B and restoring the transaction log backup on Server B?
Every once in a while the transaction log backup is larger than normal and takes longer to copy. The restore job then gets an operating system error saying that the file is in use.
This is not such a big deal, since the file is automatically applied the next time around, but it would be nicer to have a more elegant solution in general, and one that specifically fixes this issue.