WordPress database migration - database
I've looked around the WordPress forums about this and didn't find anything, so I thought I might try here.
If you have a staging/dev WordPress setup used for testing new plugins and such, how do you go about migrating the data in the staging database back to the production database? Is there a "WordPress best practices" way to do this, or am I limited to manually migrating tables from one database to the other?
I have a script that mysqldumps a copy of my production WordPress DB, restores it over my test WordPress install & then corrects all the "production" settings & URLs in the test DB.
Both my production & test databases live on the same server, but you could change the mysqldump settings to dump from a remote MySQL server & restore to a local server quite easily.
Here are my scripts:
overwrite_test.coach_db_with_coache_db.sh
#!/bin/bash
# Dump the production DB, restore it over the test DB, then rewrite the
# production-specific settings & URLs for the test environment.
dbUser="co*******"
dbPassword="*****"
dbSource="coach_production"
dbDest="coach_test"
tmpDumpFile="/tmp/$dbSource.sql"
# Dump production (DROP TABLE statements included so the restore fully replaces the test tables)
mysqldump --add-drop-table --extended-insert --routines --user="$dbUser" --password="$dbPassword" --result-file="$tmpDumpFile" "$dbSource"
# Restore the dump into the test DB
mysql --user="$dbUser" --password="$dbPassword" "$dbDest" < "$tmpDumpFile"
# Fix up domains and test-user passwords
mysql --user="$dbUser" --password="$dbPassword" "$dbDest" < /AdminScripts/change_coach_to_test.coach.sql
change_coach_to_test.coach.sql
-- Change all db references from @oldDomain to @newDomain
SET @oldDomain = 'coach.co.za';
SET @newDomain = 'test.coach.co.za';
SET @testUsersPassword = 'password';
UPDATE `wp_1_options` SET `option_value` = REPLACE(`option_value`,@oldDomain,@newDomain) WHERE `option_name` IN ('siteurl','home','fileupload_url');
UPDATE `wp_1_posts` SET `post_content` = REPLACE(`post_content`,@oldDomain,@newDomain);
UPDATE `wp_1_posts` SET `guid` = REPLACE(`guid`,@oldDomain,@newDomain);
UPDATE `wp_blogs` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
-- Give every test user a known password
UPDATE `wp_users` SET `user_pass` = MD5( @testUsersPassword );
-- Only valid for main wpmu site
UPDATE `wp_site` SET `domain` = @newDomain WHERE `domain` = @oldDomain;
Perhaps you are just looking for the wrong thing. Wouldn't a backup plugin handle this with ease? I know they exist for all the big CMS packages...
The two methods would be using the export/import feature under tools or copying the database. I email myself a copy of my production database weekly using the WordPress Database Backup plugin.
The import feature can be problematic for moving a WordPress blog, as you often have to adjust your php.ini file: the maximum upload file size on a hosted PHP installation tends to be too small by default.
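For reference, a hedged sketch of checking and raising those limits (the 64M values are just examples; on shared hosting you may need a .user.ini or .htaccess instead of php.ini):
# Check the current PHP limits that cap what the Tools > Import screen will accept
php -r 'echo ini_get("upload_max_filesize"), " / ", ini_get("post_max_size"), PHP_EOL;'
# Then raise them in php.ini (restart the web server / PHP-FPM afterwards), e.g.:
#   upload_max_filesize = 64M
#   post_max_size = 64M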
I wanted to pull the database from my production WordPress website into an offline development copy of it on my desktop machine, so I could modify the site and test it with a full set of the existing blog content and history.
This proved to be problematic, as simply making an offline backup of the database and importing it into the local development database did not work.
The steps for overcoming these problems in moving data from the production to the dev database can probably be used to go the other way as well - so I think you can use these guidelines for what you want to do too; just start with the dev data and move it to prod.
The problems here were:
- The permalink designations for the blog posts are all stored in the database as they would be for the online version, but my offline copy isn't at the domain address, instead it is in the localhost directory. So when I launch the site locally, although the css formatting and images are all in place (the image links being relative), the actual blog posts don't show up.
- Many of the links throughout the site link back out to the internet, so if you try to navigate to archives, or comments, or categories, or the main posts, you get sent back out to the internet instead of staying in the database on the local machine.
To make sure I was doing this right, I blew away the wordpress install I had on my local machine and restarted from scratch.
Once I had a clean, new wordpress install and brand new default freshly created local database for it, I opened up the database in phpMyAdmin and took a look at the wp_posts table. Inside there, each record (in other words, each post) has a column titled "guid", which shows the location of that post. For example, the first one in a fresh, default install contains this "guid" value:
http://localhost/wordpress/?p=1
If you look in the wp_posts table of your online version, you'll see instead in this location the url to your site online.
You can't just import the tables wholesale into your local install, because you'll be importing all these outside references. It will make your local version impossible to navigate locally.
So, I created a backup copy of my online site's database and saved it locally as a .sql file. I then opened that file in a text editor (I used notepad++, a great piece of free software, but you could use any text editor). Things I needed to look out for:
- For whatever reason, the tables on my online site aren't just, for example, "wp_posts" - they are "wp_something_posts"... there are some extra letters in there in the table names.
- Any references to http://... that contain my online url instead of localhost/wordpress
To keep it simple let's just do only the posts. In the backup copy of the .sql you've made of your online database, find the beginning of the wp_posts table. It will look something like this:
--
-- Table structure for table `wp_posts`
--
DROP TABLE IF EXISTS `wp_posts`;
CREATE TABLE `wp_posts` (
...and so on. Highlight everything above that, up to just below the comment marking the beginning of the database at the top of the file (it will say -- Database: 'your database name'), and delete it. Then go to the end of your wp_posts table, and delete everything after the end of it down to the bottom of the file. Now your file only contains your posts, and nothing else.
Save this as a separate document. Call it posts.sql or something like that.
Now, in this posts.sql file, you need to do two find/replace actions.
- Find every instance of the table name wp_something_posts and replace it with wp_posts. You only need to do this if your backup copy of your online database doesn't match your clean local install as far as the table names go. You want whatever the table name is in this file to match what your locally installed wordpress database has as this table name. If you don't make these names match, you are just going to end up importing the posts into a new, differently named table, which will be of no use to you at all.
- Find every instance of http://... (replace the ellipsis with your url) and replace it with http://localhost/wordpress (or whatever the local url to your dev version of the site is). A command-line sketch of both replacements follows just after this list.
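If you'd rather not do those replacements by hand in a text editor, here is a hedged sed sketch of the same two substitutions; "wp_something_posts" and the URLs are placeholders for whatever your dump actually contains:
# Do both replacements in place, keeping a posts.sql.bak backup of the original
sed -i.bak \
  -e 's/wp_something_posts/wp_posts/g' \
  -e 's|http://www.yoursite.com|http://localhost/wordpress|g' \
  posts.sql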
Now save this file again, to make sure you've got these changes set.
Now that you've done that, use phpMyAdmin to get into the wordpress database on your local machine, select the "import" tab and navigate the selector to the posts.sql file you just made, and then import it. This will pull all the data in that file into your local wp_posts table.
When that finishes, browse your local wordpress site. You'll see all your posts in there now. Hooray!
You may need to do something similar for a few other tables if you want to bring in your comments, tags, categories, and static pages you've created, etc.
I realize this is a convoluted process. There is probably a tool out there somewhere that makes this activity easier, and if someone knows of one I'd love to find out about it. If someone knows of a better way to do this manually than what I've described, I'd love to know that as well!
Until then, this is the way I figured out how to do it. Hopefully it helps get you going in the right direction.
You need to handle the serialized objects. Here is a client side HTML5 utility to handle it. Because it is all JavaScript, it's quite fast.
The alternative would be hooking a bash script into your deployment, so that once the site is deployed, the DB is backed up and the serialized values are rewritten with the new domain.
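To see why a plain text search & replace isn't enough here, a small sketch (using the domains from the scripts above) that corrupts a serialized option value:
# Serialized PHP values embed each string's byte length, so a naive replace breaks them
php -r '
  $v = serialize(array("siteurl" => "http://coach.co.za"));
  echo $v, PHP_EOL;                 # a:1:{s:7:"siteurl";s:18:"http://coach.co.za";}
  $broken = str_replace("coach.co.za", "test.coach.co.za", $v);
  var_dump(@unserialize($broken));  # bool(false) because the declared length no longer matches
'
Tools that understand serialization (such as the utility mentioned above, or WP-CLI's wp search-replace command) recompute those lengths instead of treating the dump as plain text.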
This about sums up the problems with the wordpress core architecture... but I wrote a plugin that solves the problems with domain names and absolute urls being stored in the database:
http://wordpress.org/extend/plugins/root-relative-urls/
This will solve the problems outlined by @oddbill. Though don't worry too much about the url being in the GUID column as that field is never used for link generation.
@markratledge provides a couple links to some lengthy documents that basically say this:
//export
mysqldump -u[username] -p[password] [database] > backup.sql
//import
mysql -u[username] -p[password] [database] < backup.sql
You'll want to exclude the comments/commentmeta tables if you push to production from staging, so you don't lose all of your comments and trackbacks (@DavidLaing's approach will wipe those out). And this assumes you only make content changes in your staging environment. If you want to make changes in both production and your staging environment, you'll need to write scripts that sync the data instead of wholesale overwriting it... good luck with that task; may I suggest adding created & modified timestamp columns before you invest too much time in the current schema.
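A hedged sketch of that exclusion using mysqldump's --ignore-table option; the database names, table prefix, and credentials are placeholders for your own setup:
# Dump staging without the comment tables, then restore over production
mysqldump -u"$dbUser" -p"$dbPassword" \
  --ignore-table=staging_db.wp_comments \
  --ignore-table=staging_db.wp_commentmeta \
  staging_db > staging_minus_comments.sql
mysql -u"$dbUser" -p"$dbPassword" production_db < staging_minus_comments.sql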
And finally, @RussellStuever's approach is suitable in most circumstances; just be sure to know when you are browsing your host-mapped site versus your production site. And really be sure about it, because some browsers cache DNS lookups for days, until you physically close them and start a new process. At that point switching hosts may take some time, and switching frequently may get frustrating. And if you need to test with an iPhone, you'll need to publish the site live first, or use a good router that can remap outbound internet requests to local servers, because you cannot modify hosts files on most mobile devices.
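For reference, the host mapping being described is a one-line entry in the hosts file (the IP is a placeholder; the file lives at /etc/hosts on Linux/macOS and C:\Windows\System32\drivers\etc\hosts on Windows):
# Point www.production.com at a local/staging server, for this machine only
echo "192.168.1.50  www.production.com" | sudo tee -a /etc/hosts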
My plugin lets you develop and test from http://localhost/ or http://staging.server.local/ or http://www.production.com without any of the usual pitfalls. And then to migrate data, it's as simple as exporting and importing the data, no search & replace step or database setting tweaks necessary.
And don't rely on the import/export tool, it doesn't capture everything in typical wordpress installations, and still requires a needless search & replace step.
Related
How can I remove plugin settings stored in the WordPress database?
I am trying to totally remove the Podcasting Plugin by TSG, since it generated thousands of useless lines in its settings and in my blog home page. But deleting its files is not enough, since all its settings are stored in the WordPress database's options table. My question is: how could I delete all these settings in order to start a new installation from scratch? Thanks.
You need to go into the database directly. If you're using a MySQL administration tool (e.g. phpMyAdmin), go there and search for the settings that the plugin has changed (e.g. if the plugin changed the web address for the site, you can manually change it back to what you want). If you don't know exactly what it changed, which can be difficult to determine, and you have saved a previous version of the MySQL database, you could start over and import the old .sql file that you have saved.
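A hedged sketch of the direct cleanup, assuming the plugin's options share a common name prefix; 'podcast' is a guess, so list the matching rows and review them before deleting anything (credentials and database name are placeholders):
# List what the plugin left behind in wp_options, then delete it
mysql -u"$dbUser" -p"$dbPassword" wordpress_db \
  -e "SELECT option_name FROM wp_options WHERE option_name LIKE '%podcast%';"
mysql -u"$dbUser" -p"$dbPassword" wordpress_db \
  -e "DELETE FROM wp_options WHERE option_name LIKE '%podcast%';"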
Can an Oracle test database import be done without adversely affecting the production Oracle instances?
On a production database, 11gR2, I have exported everything via SQL Developer to a file, file.sql. I just took all the defaults. I have a test server with 11gR2 that I am going to copy the .sql dump file over to. Is there anything contained in an export (the one with everything in it: all the objects, data, and so on) that would cause problems for the production environment when I import the data into the test environment? In other words, I don't want to break my production. I don't have tnsnames.ora set up on my test. I only want the schema, data, and all the rest mentioned.
EDIT: SELECT * FROM DBA_SCHEDULER_WINDOWS; showed nothing active. DBA_JOBS shows APEX jobs about mail, stock jobs I think, and one about EMD_MAINTENANCE.EXECUTE_EM_DBMS_JOB_PROCS(). SELECT * FROM DBA_DB_LINKS; there is a link, but I know what it is from and it is no longer being used. Thanks for the info you gave. I feel better now.
The standard things I would think of are:
- Use another system (not the same VM/server as production).
- Disable all DBA_SCHEDULER_JOBS and don't enable them again until you review their code (see the sketch below).
- Disable all DBA_JOBS and don't enable them again until you review their code.
- Point DBA_DB_LINKS (both public and private database links) from the production databases to the corresponding test databases, or delete them; these sometimes use tnsnames.ora, but sometimes bypass it.
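A hedged sketch of the scheduler-job step, run as a DBA on the test instance only, after the import has been done there (nothing here touches production):
# Disable every enabled scheduler job on the TEST instance
sqlplus / as sysdba <<'SQL'
BEGIN
  FOR j IN (SELECT owner, job_name FROM dba_scheduler_jobs WHERE enabled = 'TRUE') LOOP
    DBMS_SCHEDULER.DISABLE(j.owner || '.' || j.job_name, force => TRUE);
  END LOOP;
END;
/
SQL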
Web2Py tables broken migration
I tried adding a new table into one of my models in Web2Py. In addition I added a new field to an existing table. I tried loading a page that used those tables and it didn't work; it claimed those things don't exist. Okay, so I set migrations to False here: db = DAL('sqlite://storage.sqlite',pool_size=1,check_reserved=['all'], migrate = False). Reloaded the page, no change. Then I tried doing something like this on the tables it wouldn't recognize: db.define_table(....,migrate=False,fake_migrate=True), and I changed the DAL call to be db = DAL(...,fake_migrate_all=True), as the web2py manual says. Still no change.
So then I said, well okay, I will have to dump the whole database. So I took everything out of my database folder and tried to reload it with a clean slate. Now it just doesn't load at all. According to database administration none of the tables exist, although if I check again in the database folder they are all there. If I try to load the application it immediately reports that none of my called tables exist. I have all the code backed up on a repo, but I can't uninstall the current app because I don't have that kind of read access on the server this is running on. Is there anything I can do?
Edit: By the way, this is happening on SQLite.
Besides dumping the DB, have you also tried cleaning up the databases folder? If you don't do this, web2py will go insane, because the files say that there are tables, but the db doesn't. Also, take a look here about fixing broken migrations and some caveats about SQLite.
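As a hedged sketch of that cleanup (paths are placeholders, and stop the app first): web2py keeps its migration metadata in *.table files next to the SQLite file, and removing them lets the DAL rebuild its picture of the database.
# Clear web2py's migration metadata so it can be regenerated against the real database
cd /path/to/web2py/applications/yourapp/databases
rm -f *.table            # keep storage.sqlite if you still need its data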
Empty my Sqlite3 database in RoR
I am working on a Ruby on Rails 3 web application using sqlite3. I have been testing my application on the fly, creating and destroying things in the database, sometimes through the new and edit actions and sometimes through the Rails console. I am interested in emptying my database totally and having only the empty tables left. How can I achieve this? I am working with a team, so I am interested in two answers: 1) How do I empty the database just for myself? 2) How can the others empty it (if possible), some of whom are using MySQL rather than sqlite3? (We are all working on the same project through an SVN repository.)
To reset your database, you can run:
rake db:schema:load
which will recreate your database from your schema.rb file (maintained by your migrations). This will additionally protect you from migrations that may later fail due to code changes. Your dev database should be specific to your environment - if you need certain data, add it to your seeds.rb file. Don't share a dev database, as you'll quickly get into situations where other changes make your version incompatible.
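For reference, a hedged sketch of the commands involved (Rails 3 era, run from the application root):
# Rebuild the dev database from schema.rb
bundle exec rake db:schema:load
# Or drop, recreate, load the schema and re-run db/seeds.rb in one go
bundle exec rake db:reset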
Download sqlitebrowser here: http://sqlitebrowser.org/ Install it, run it, click Open Database (top left) and point it to locationOfYourRailsApp/db/development.sqlite3. Then switch to the Browse Data tab, where you can delete or add data.
I found that deleting the development.sqlite3 file from the db folder and then running rake db:migrate from the command line solves the problem for all of my team working on sqlite3.
As far as I know there is no user/grant management in sqlite, so it is difficult to control access; you can only protect the database by file access. If you want to use an empty database for test purposes, generate it once, copy the file somewhere, and use a copy of that file just before testing.
How can I put a database under git (version control)?
I'm doing a web app, and I need to make a branch for some major changes. The thing is, these changes require changes to the database schema, so I'd like to put the entire database under git as well. How do I do that? Is there a specific folder that I can keep under a git repository? How do I know which one? How can I be sure that I'm putting the right folder? I need to be sure, because these changes are not backward compatible; I can't afford to screw up. The database in my case is PostgreSQL.
Edit: Someone suggested taking backups and putting the backup file under version control instead of the database. To be honest, I find that really hard to swallow. There has to be a better way.
Update: OK, so there's no better way, but I'm still not quite convinced, so I will change the question a bit: I'd like to put the entire database under version control; what database engine can I use so that I can put the actual database under version control instead of its dump? Would sqlite be git-friendly? Since this is only the development environment, I can choose whatever database I want.
Edit2: What I really want is not to track my development history, but to be able to switch from my "new radical changes" branch to the "current stable branch" and be able, for instance, to fix some bugs/issues with the current stable branch. Such that when I switch branches, the database auto-magically becomes compatible with the branch I'm currently on. I don't really care much about the actual data.
Take a database dump, and version control that instead. This way it is a flat text file. Personally I suggest that you keep both a data dump, and a schema dump. This way using diff it becomes fairly easy to see what changed in the schema from revision to revision. If you are making big changes, you should have a secondary database that you make the new schema changes to and not touch the old one since as you said you are making a branch.
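A minimal sketch of that setup, assuming a PostgreSQL database named mydb and a db/ directory in the repository:
# Dump schema and data separately so diffs between revisions stay readable
pg_dump --schema-only mydb > db/schema.sql
pg_dump --data-only mydb > db/data.sql
git add db/schema.sql db/data.sql
git commit -m "Database snapshot"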
I'm starting to think of a really simple solution, don't know why I didn't think of it before!! Duplicate the database, (both the schema and the data). In the branch for the new-major-changes, simply change the project configuration to use the new duplicate database. This way I can switch branches without worrying about database schema changes. EDIT: By duplicate, I mean create another database with a different name (like my_db_2); not doing a dump or anything like that.
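Since the asker is on PostgreSQL, the duplication itself can be a one-liner, sketched here with the names from the answer; the source database must have no other active connections while it is copied:
# Create my_db_2 as a copy (schema + data) of my_db
createdb -T my_db my_db_2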
Use something like LiquiBase; this lets you keep revision control of your Liquibase files. You can tag changes for production only, and have LiquiBase keep your DB up to date for either production or development (or whatever scheme you want).
- Irmin (branching + time travel)
- Flur.ee (immutable + time travel + graph query)
- XTDB (formerly called 'CruxDB') (time travel + query)
- TerminusDB (immutable + branching + time travel + Graph Query!)
- DoltDB (branching + time-travel + SQL query)
- Quadrable (branching + remote state verification)
- EdgeDB (no real time travel, but migrations derived by the compiler after schema changes)
- Migra (diffing for Postgres schemas/data; auto-generate migration scripts, auto-sync db state)
- ImmuDB (immutable + time-travel)
I've come across this question as I've got a similar problem, where something approximating a DB-based directory structure stores 'files', and I need git to manage it. It's distributed across a cloud, using replication, hence its access point will be via MySQL.
The gist of the above answers seems to be to suggest an alternative solution to the problem asked, which kind of misses the point of using Git to manage something in a database, so I'll attempt to answer that question.
Git is a system which in essence stores a database of deltas (differences), which can be reassembled, in order, to reproduce a context. The normal usage of git assumes that context is a filesystem, and those deltas are diffs in that file system, but really all git is, is a hierarchical database of deltas (hierarchical because in most cases each delta is a commit with at least one parent, arranged in a tree).
As long as you can generate a delta, in theory, git can store it. The problem is that normally git expects the context on which it's generating deltas to be a file system, and similarly, when you check out a point in the git hierarchy, it expects to generate a filesystem.
If you want to manage change in a database, you have 2 discrete problems, and I would address them separately (if I were you). The first is schema, the second is data (although in your question you state data isn't something you're concerned about).
A problem I had in the past was a Dev and Prod database, where Dev could take incremental changes to the schema, and those changes had to be documented in CVS and propagated to live, along with additions to one of several 'static' tables. We did that by having a 3rd database, called Cruise, which contained only the static data. At any point the schema from Dev and Cruise could be compared, and we had a script to take the diff of those 2 files and produce an SQL file containing ALTER statements to apply it. Similarly any new data could be distilled to an SQL file containing INSERT commands. As long as fields and tables are only added, and never deleted, the process could automate generating the SQL statements to apply the delta.
The mechanism by which git generates deltas is diff, and the mechanism by which it combines 1 or more deltas with a file is called merge. If you can come up with a method for diffing and merging from a different context, git should work, but as has been discussed you may prefer a tool that does that for you. My first thought towards solving that is this https://git-scm.com/book/en/v2/Customizing-Git-Git-Configuration#External-Merge-and-Diff-Tools which details how to replace git's internal diff and merge tool.
I'll update this answer as I come up with a better solution to the problem, but in my case I expect to only have to manage data changes, in so far as a DB-based filestore may change, so my solution may not be exactly what you need.
There is a great project called Migrations under Doctrine that was built just for this purpose. It's still in alpha state and built for PHP. http://docs.doctrine-project.org/projects/doctrine-migrations/en/latest/index.html
Take a look at RedGate SQL Source Control. http://www.red-gate.com/products/sql-development/sql-source-control/ This tool is a SQL Server Management Studio snap-in which will allow you to place your database under Source Control with Git. It's a bit pricey at $495 per user, but there is a 28 day free trial available. NOTE I am not affiliated with RedGate in any way whatsoever.
I've released a tool for sqlite that does what you're asking for. It uses a custom diff driver leveraging the sqlite project's tool 'sqldiff', UUIDs as primary keys, and leaves off the sqlite rowid. It is still in alpha, so feedback is appreciated. Postgres and MySQL are trickier, as the binary data is kept in multiple files and may not even be valid if you were able to snapshot it. https://github.com/cannadayr/git-sqlite
I want to do something similar: add my database changes to my version control system. I am going to follow the ideas in this post from Vladimir Khorikov, "Database versioning best practices". In summary, I will store both the schema and the reference data in a source control system, and for every modification we will create a separate SQL script with the changes. In case it helps!
You can't do it without atomicity, and you can't get atomicity without either using pg_dump or a snapshotting filesystem. My postgres instance is on zfs, which I snapshot occasionally. It's approximately instant and consistent.
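A hedged sketch of that snapshot step; the dataset name is a placeholder for wherever your Postgres data directory lives:
# Atomic, near-instant snapshot of the dataset holding the Postgres data directory
sudo zfs snapshot tank/postgres@$(date +%Y%m%d-%H%M%S)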
I think X-Istence is on the right track, but there are a few more improvements you can make to this strategy.
First, use:
$ pg_dump --schema ...
to dump the tables, sequences, etc. and place this file under version control. You'll use this to separate the compatibility changes between your branches.
Next, perform a data dump for the set of tables that contain configuration required for your application to operate (you should probably skip user data, etc.), like form defaults and other non-user-modifiable data. You can do this selectively by using:
$ pg_dump --table=... <or> --exclude-table=...
This is a good idea because the repo can get really clunky when your database gets to 100 MB+ if you do a full data dump. A better idea is to back up a more minimal set of data that you require to test your app. If your default data is very large though, this may still cause problems.
If you absolutely need to place full backups in the repo, consider doing it in a branch outside of your source tree. An external backup system with some reference to the matching svn rev is likely best for this though.
Also, I suggest using text format dumps over binary for revision purposes (for the schema at least), since these are easier to diff. You can always compress these to save space prior to checking in.
Finally, have a look at the postgres backup documentation if you haven't already. The way you're commenting on backing up 'the database' rather than a dump makes me wonder if you're thinking of file system based backups (see section 23.2 for caveats).
What you want, in spirit, is perhaps something like Post Facto, which stores versions of a database in a database. Check this presentation. The project apparently never really went anywhere, so it probably won't help you immediately, but it's an interesting concept. I fear that doing this properly would be very difficult, because even version 1 would have to get all the details right in order to have people trust their work to it.
This question is pretty much answered but I would like to complement X-Istence's and Dana the Sane's answer with a small suggestion. If you need revision control with some degree of granularity, say daily, you could couple the text dump of both the tables and the schema with a tool like rdiff-backup which does incremental backups. The advantage is that instead of storing snapshots of daily backups, you simply store the differences from the previous day. With this you have both the advantage of revision control and you don't waste too much space. In any case, using git directly on big flat files which change very frequently is not a good solution. If your database becomes too big, git will start to have some problems managing the files.
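A hedged sketch of the coupling being described; the directory paths are placeholders:
# Keep the latest dumps in one directory and let rdiff-backup store only the daily diffs
rdiff-backup /srv/db-dumps /srv/db-dump-history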
Here is what I am trying to do in my projects: separate data, schema, and default data.
The database configuration is stored in a configuration file that is not under version control (.gitignore). The database defaults (for setting up new projects) are a simple SQL file under version control. For the database schema, create a database schema dump under version control.
The most common way is to have update scripts that contain SQL statements (ALTER TABLE ... or UPDATE). You also need to have a place in your database where you save the current version of your schema. Take a look at other big open source database projects (Piwik, or your favorite CMS system); they all use update scripts (1.sql, 2.sql, 3.sh, 4.php, 5.sql).
But this is a very time-intensive job: you have to create and test the update scripts, and you need to run a common update script that compares the version and runs all necessary update scripts.
So theoretically (and that's what I am looking for) you could dump the database schema after each change (manually, cron job, git hooks (maybe before commit)) and only in some very special cases create update scripts. After that, your common update script runs the normal update scripts (for the special cases), then compares the schemas (the dump and the current database) and automatically generates the necessary ALTER statements. There are some tools that can do this already, but I haven't yet found a good one.
What I do in my personal projects is: I store my whole database in Dropbox and then point my MAMP or WAMP workflow to use it right from there. That way the database is always up to date wherever I need to do some developing. But that's just for dev! Live sites use their own server for that, of course! :)
Storing each level of database changes under git version control is like pushing your entire database with each commit and restoring your entire database with each pull. If your database is so prone to crucial changes and you cannot afford to lose them, you can just update your pre-commit and post-merge hooks. I did the same with one of my projects and you can find the directions here.
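As a hedged sketch of what such a hook might do (this is not the linked project's actual script; the database name and dump tool are placeholders):
#!/bin/sh
# .git/hooks/pre-commit: dump the schema so every commit carries the matching DB structure
pg_dump --schema-only mydb > db/schema.sql
git add db/schema.sql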
That's how I do it: since you have a free choice about DB type, use a file-based DB like e.g. Firebird. Create a template DB which has the schema that fits your actual branch and store it in your repository. When executing your application, programmatically create a copy of your template DB, store it somewhere else, and just work with that copy. This way you can put your DB schema under version control without the data. And if you change your schema, you just have to change the template DB.
We used to run a social website on a standard LAMP configuration. We had a Live server, a Test server, and a Development server, as well as the local developers' machines. All were managed using GIT.
On each machine, we had the PHP files, but also the MySQL service and a folder with images that users would upload. The Live server grew to have some 100K (!) recurrent users, the dump was about 2GB (!), and the image folder was some 50GB (!). By the time that I left, our server was reaching the limit of its CPU, RAM, and most of all, the concurrent network connection limits (we even compiled our own version of the network card driver to max out the server, 'lol'). We could not (nor should you assume with your website) put 2GB of data and 50GB of images in GIT.
To manage all this under GIT easily, we would ignore the binary folders (the folders containing the images) by inserting these folder paths into .gitignore. We also had a folder called SQL outside the Apache document root path. In that SQL folder, we would put our SQL files from the developers in incremental numberings (001.florianm.sql, 001.johns.sql, 002.florianm.sql, etc). These SQL files were managed by GIT as well. The first sql file would indeed contain a large set of DB schema. We don't add user data in GIT (e.g. the records of the users table, or the comments table), but data like configs or topology or other site-specific data was maintained in the sql files (and hence by GIT). Mostly it's the developers (who know the code best) that determine what is and is not maintained by GIT with regards to SQL schema and data.
When it got to a release, the administrator logged in onto the dev server, merged the live branch with all developers and needed branches on the dev machine into an update branch, and pushed it to the test server. On the test server, he checked if the updating process for the Live server was still valid, and in quick succession, pointed all traffic in Apache to a placeholder site, created a DB dump, pointed the working directory from 'live' to 'update', executed all new sql files into mysql, and repointed the traffic back to the correct site. When all stakeholders agreed after reviewing the test server, the administrator did the same thing from the Test server to the Live server. Afterwards, he merged the live branch on the production server to the master branch across all servers, and rebased all live branches. The developers were responsible themselves for rebasing their branches, but they generally know what they are doing.
If there were problems on the test server, e.g. the merges had too many conflicts, then the code was reverted (pointing the working branch back to 'live') and the sql files were never executed. The moment that the sql files were executed, this was considered a non-reversible action at the time. If the SQL files were not working properly, then the DB was restored using the dump (and the developers told off for providing ill-tested SQL files).
Today, we maintain both a sql-up and a sql-down folder, with equivalent filenames, where the developers have to test that the upgrading sql files can be equally downgraded. This could ultimately be executed with a bash script, but it's a good idea if human eyes keep monitoring the upgrade process.
It's not great, but it's manageable. Hope this gives an insight into a real-life, practical, relatively high-availability site. Be it a bit outdated, but still followed.
Update Aug 26, 2019: Netlify CMS is doing it with GitHub; an example implementation can be found here, with all information on how they implemented it: netlify-cms-backend-github
I say don't. Data can change at any given time. Instead you should only commit data models in your code, schema and table definitions (create database and create table statements) and sample data for unit tests. This is kinda the way that Laravel does it, committing database migrations and seeds.
I would recommend neXtep (link removed - domain was taken over by an NSFW website) for version controlling the database. It has a good set of documentation and forums that explain how to install it and how to handle the errors encountered. I have tested it for PostgreSQL 9.1 and 9.3; I was able to get it working for 9.1, but for 9.3 it doesn't seem to work.
Use a tool like iBatis Migrations (manual, short tutorial video) which allows you to version control the changes you make to a database throughout the lifecycle of a project, rather than the database itself. This allows you to selectively apply individual changes to different environments, keep a changelog of which changes are in which environments, create scripts to apply changes A through N, rollback changes, etc.
I'd like to put the entire database under version control, what database engine can I use so that I can put the actual database under version control instead of its dump?
This is not database-engine dependent. For Microsoft SQL Server there are lots of version controlling programs. I don't think that problem can be solved with git; you have to use a pgsql-specific schema version control system. I don't know whether such a thing exists or not...
Use a version-controlled database, of which there are now several. https://www.dolthub.com/blog/2021-09-17-database-version-control/ These products don't apply version control on top of another type of database -- they are their own database engines that support version control operations. So you need to migrate to them or start building on them in the first place. I write one of them, DoltDB, which combines the interfaces of MySQL and Git. Check it out here: https://github.com/dolthub/dolt
I wish it were simpler. Checking in the schema as a text file is a good start to capture the structure of the DB. For the content, however, I have not found a cleaner, better method for git than CSV files. One per table. The DB can then be edited on multiple branches and merges extremely well.