We are several devs working on a project with a MariaDB backend.
We would like to have revisions of our DB schema changes and put them in source control.
Is there a way/tool to compare MariaDB database schemas and script the changes?
I know dbForge offers support for MariaDB, but I'm looking for a free alternative to that tool.
Thanks
Keep it simple. Dump the schema with the mysqldump tool (if I remember correctly, MariaDB ships similarly named utilities) and keep it in git/hg/svn.
mysqldump -u root -p --no-data dbname > schema.sql
It will produce the SQL statements to create each table, in a consistent format with every field on its own line, so you can easily compare versions and review diffs in whatever tool you use for version control.
There is really only one catch: commas. For example, if you add a new field it is appended last, but the previously last field also changes, because its line now ends with a comma in schema.sql. That is a common problem with any version-control setup; the diff still shows you exactly what changed.
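As a minimal sketch of that workflow (the database name and the git repository are assumptions; adjust to your setup):

#!/bin/bash
# Dump only the schema (no data) and commit it whenever it changes.
# --skip-dump-date keeps the "Dump completed on" timestamp out of the file,
# so the diff only shows real schema changes.
mysqldump -u root -p --no-data --skip-dump-date dbname > schema.sql
git add schema.sql
git commit -m "Update database schema dump"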
To compare the schemas of two MariaDB databases I'd suggest using:
TiCodeX SQL Schema Compare (https://www.ticodex.com).
It also gives you the migration script to update the destination database in case there are differences.
It's a nice tool that runs on Windows, Linux and Mac and can compare the schemas of MS-SQL, MySQL, MariaDB and PostgreSQL databases. Easy to use and effective. It may help you.
It's worth mentioning that it's the only tool I've found that also works nicely on Linux and macOS.
How to create a synonym in an Oracle database schema on one server for a table in a PostgreSQL database schema on another server?
I have a schema in an Oracle database on one server and want to create a synonym for a table that lives in a schema of a PostgreSQL database on another server.
To create the synonym, we need a remote database link between these two databases sitting on two different servers.
How can we do this? Please suggest a solution.
Just to clarify, I believe the question is trying to figure out how to get PostgreSQL data to appear as a table inside of Oracle. (The existing comments seem to be reading it the other way around, in which case, yes, an FDW would be the solution, but in this case that will not work).
In the past (on older versions of Oracle) when we needed this we were forced to build custom replication scripts to transfer data from Postgres into Oracle systems. For a single table, it is pretty straightforward to do with something like Perl & DBI... feel free to substitute that with your favorite scripting language.
On newer Oracle systems, I believe you can use Oracle Database Gateway to accomplish this. I am not sure if they support Postgres directly, but they do support ODBC (and I think JDBC), which should work. Here is an example blog post setting this up with MSSQL (http://oracle-help.com/oracle-database/installation-oracle-database-gateway/); the process should be similar for Postgres.
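If you go the gateway route, the Oracle side ends up looking roughly like the sketch below. It assumes DG4ODBC and a TNS alias (here called PGODBC) pointing at the gateway listener are already configured; the link, user and table names are made up for illustration.

# Run on the Oracle side once the ODBC gateway and TNS alias exist.
# PGODBC, pg_link and the table names are assumptions for this sketch.
sqlplus system/oracle_password <<'SQL'
-- Database link through the heterogeneous-services gateway
CREATE DATABASE LINK pg_link
  CONNECT TO "pg_user" IDENTIFIED BY "pg_password"
  USING 'PGODBC';

-- PostgreSQL identifiers are usually lower case, so quote them
CREATE SYNONYM customers FOR "customers"@pg_link;

-- Quick sanity check
SELECT COUNT(*) FROM customers;
SQL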
Hope this helps!
I have a strange problem.
I tried to move a database from one server to another using pgAdmin III.
The database was created on a server with PostgreSQL 8.4.9 and I wanted to move it to a second server with PostgreSQL 8.2.11.
To do it, I used the "backup" option and saved the file; after that I used the "restore" option on the new database. The tables are loaded, but there aren't any functions in the new database.
Maybe it is because of the different PostgreSQL versions?
Does anyone know the reason? Any solution?
If the functions aren't around, double-check that plpgsql is available as a language. It's available by default nowadays, but making it available used to require a create language statement.
That said, I'd echo the comments: you really should be upgrading to a 9.x Postgres version that is still supported, rather than downgrading from an unsupported version to one that is even older.
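A quick way to check, and if necessary install, the language on those old 8.x servers (the database name here is an assumption):

# List the procedural languages installed in the target database
psql -d mydb -c "SELECT lanname FROM pg_language;"

# On 8.x, add PL/pgSQL if it is missing (either form works on those versions)
createlang plpgsql mydb
# or:
psql -d mydb -c "CREATE LANGUAGE plpgsql;"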
I'd recommend doing it via pg_dump from an interactive session, exporting the complete database to one or more SQL files. There you can use the -s switch to dump only the schema, which should include the created functions. Having this SQL file also makes it easier to backport your changes, or to debug if something does not apply cleanly on the old server.
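For example (the host and database names are assumptions), keeping in mind that a dump taken from 8.4 may contain syntax the older 8.2 server rejects:

# Dump only the schema (including functions) from the old 8.4 server...
pg_dump -s -h old-server mydb > schema.sql
# ...and load it into the database on the 8.2 server
psql -h new-server -d mydb -f schema.sql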
I am starting to build a new database for my project using PostgreSQL. (I am new to PostgreSQL and database by the way.)
I think my development workflow is very bad, and here is a part of it:
create table/view/function with pgAdmin.
determine the name of the file before saving the code.
The goal is to be able to recreate the database automatically by running all the saved scripts,
and I need to know the order in which to run these scripts because of dependencies.
So I add a number for each file indicating the order.
for example: 001_create_role_user.ddl, 002_create_database_project.ddl, 013_user_table.ddl
save the code.
commit the file to the repository using Git.
Here are some of the drawbacks I can think of:
I can easily forget what changes I made, for example that I created a new type
or edited a comment.
It is hard to determine a name (order) for the file.
Changing the code would be a pain in the ass, especially when the new
code changes the order.
So my workflow is bad. I was wondering what other Postgres developers' workflow looks like.
Are there any good tools (free or cheap) for editing and saving scripts? good IDE maybe?
It would be great if I can create automated unit tests for the database.
Any tool for recreating the database? CI server tool?
Basically I am looking for any advice, good practice, or good tool for database development.
(Sorry, this question may not fit for the Q&A format, but I do not know where else to ask this question.)
Check out Liquibase. We use it in the company I work at to set up our PostgreSQL database. It's open source, easy to use, and the changelog file you end up with can be added to source control. Each changeset gets an id, so that each changeset is only run once. You end up with two extra tables for tracking the changes to the database when it's run.
While it's DB-agnostic, you can use PostgreSQL SQL directly in each changeset, and each changeset can have its own comments.
The only caveat from having used it is that you have to caution yourself and others not to reuse a changeset once it's been applied to a database. Any change to an already applied changeset (even whitespace) results in a different checksum, which can cause Liquibase to abort its updates. This can end up in failed DB updates in the field, so each update to any of the changelogs should be tested locally first. Instead, all changes, however minor, should go into a new changeset with a new id. There is a changeset sub-tag called "validCheckSum" to let you work around this, but I think it's better to enforce always making a new changeset.
Here are the doc links for creating a table and creating a view for example.
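As a rough idea of what running it looks like from the command line (the changelog file name, JDBC URL and credentials are assumptions, and the exact option spelling varies between Liquibase versions):

# Apply any changesets that have not yet been run against this database
liquibase --changeLogFile=db.changelog.xml \
          --url=jdbc:postgresql://localhost:5432/mydb \
          --username=myuser --password=mypassword \
          update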
Well, your question is actually quite relevant to any database developer, and, if I understand it correctly, there is another way to get to your desired results.
One interesting thing to mention is that your idea of separating different changes into different files is the concept of migrations of Ruby On Rails. You might even be able to use the rake utility to keep track of a workflow like yours.
But now to what I think might be your solution. PostgreSQL, and other databases to be honest, have specific utilities to handle data and schemas in the way you probably need.
The pg_dumpall command-line executable will dump the whole database cluster into a file or to the console, in a form that the psql utility can simply "reload" into the same, or into another (virgin) cluster.
So, if you want to keep only the current schema (no data!) of a running database cluster, you can, as the postgres-process owner user:
$ pg_dumpall --schema-only > schema.sql
Now the schema.sql will hold exactly the same users/databases/tables/triggers/etc, but not data. If you want to have a "full-backup" style dump (and that's one way to take a full backup of a database), just remove the "--schema-only" option from the command line.
You can reload the file into another cluster (it should be virgin; you might mess up a database that already holds other data by doing this):
$ psql -f schema.sql postgres
Now if you only want to dump one database, one table, etc. you should use the pg_dump utility.
$ pg_dump --schema-only <database> > database-schema.sql
And then, to reload the database into a running postgresql server:
$ psql <database> < database-schema.sql
As for version control, you can just keep the schema.sql file under it, and dump the database into the file again before every VC commit. So at any particular version-control state you will have the code and the working database schema that goes with it.
Oh, and all the tools I mentioned are free, and pg_dump and pg_dumpall come with the standard PostgreSQL installation.
Hope that helps,
Marco
You're not far off. I'm a Java developer, not a DBA, but building out the database as a project grows is an important task for the teams I've been on, and here's how I've seen it done best:
All DB changes are driven by DDL (SQL create, alter, or drop statements) plain-text scripts. No changes through the DB client. Use a text editor that supports syntax highlighting like vim or notepad++, as the highlighting can help you find errors before you run the script.
Use a number at the beginning of each DDL script to define the order that scripts are run in. Base scripts have lower numbers.
Use a script and the psql client to load the DDL scripts from lowest to highest. Here's the bash script we use. You can use it as a base for a .bat script on Windows.
#!/bin/bash
export PGDATABASE=your_db
export PGUSER=your_user
export PGPASSWORD=your_password

for SQL_SCRIPT in $( find ./ -name "*.sql" -print | sort )
do
    echo "**** $SQL_SCRIPT ****"
    psql -q < "$SQL_SCRIPT"
done
As the project grows, use new alter scripts to change the table, don't redefine the table in the initial script.
All scripts are checked into source control. Each release is tagged so you can regenerate that version of the database in the future.
For unit testing and CI, most CI servers can run a script to drop and recreate a schema. One oft-cited framework for PostgreSQL unit testing is pgTAP.
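For a flavour of what a pgTAP test looks like, here is a minimal sketch run through psql; it assumes the pgTAP extension is already installed in the test database, and the table/column names are made up:

# Run a tiny pgTAP test suite against the freshly rebuilt schema
psql -d your_db -q <<'SQL'
BEGIN;
SELECT plan(2);                       -- we expect two checks
SELECT has_table('users');            -- the table exists
SELECT has_column('users', 'email');  -- the column exists
SELECT * FROM finish();               -- print the TAP summary
ROLLBACK;
SQL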
I'm a DBA and my workflow is almost the same as the one suggested by @Ireeder... but besides using a shell script to keep the DDL scripts applied, I use a tool called DbMaintain.
DbMaintain needs some configuration, but it is not a pain... It keeps track of which scripts have been executed and in which order.
The principal benefit is that if an SQL script that has already been executed changes, it complains by default, or executes just that script (if configured to do so)... The behavior is similar when you add a new script to the environment: it executes just that new script.
It's perfect for deploying and keeping development and production environments up to date... it isn't necessary to execute all the scripts every time (like the shell script suggested by Ireeder), nor to execute each new script manually.
If the changes are slotted, you can create scripts that apply the DDL changes and then dump the expected new database state (version).
pg_dump -f database-dump-production-yesterday.sql   # all the commands to create and populate the database as of yesterday
Today you need to introduce a new table for a new feature:
psql -f change-production-for-today.sql              # DDL and DML commands that bring the database to the new state
pg_dump --schema-only -f dump-production-today.sql   # all the commands to create today's database schema
psql -i sql-append-table-needed-data-into-dump.sql -f dump-production-today.sql
All developers should use the new database creation script for development from now on.
Red Gate has a very interesting product in beta (SQL Source Control) that installs inside SSMS and can save schema iterations through a commit button. I need the same feature for PostgreSQL. The only similar thing I've found is log_statement = ddl, but the log needs to be transformed, saved properly to a file and then committed. What is your opinion about PostgreSQL iteration tools?
Not exactly like Red Gate's tool, but check out http://www.liquibase.org/. Might pique your interest considering your topic.
Maybe Post Facto is similar to what you want.
Another PostgreSQL Diff Tool (apgdiff) sounds like it works a lot like Red Gate's SQL Source Control and SQL Compare:
Another PostgreSQL Diff Tool (also known as apgdiff) is a free PostgreSQL diff tool that is useful for comparing/diffing database schemas. The tool compares two database dump files and creates output with DDL statements that can be used to update the old database schema to the new one, or to see exactly how the two databases differ.
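Typical usage is to diff two schema-only dumps and capture the upgrade script (the file names and the jar version are assumptions):

# Dump the old and new schemas, then let apgdiff generate the migration DDL
pg_dump --schema-only old_db > old_schema.sql
pg_dump --schema-only new_db > new_schema.sql
java -jar apgdiff-2.4.jar old_schema.sql new_schema.sql > upgrade.sql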
RedGate makes a tool for Microsoft SQL Server that allows you to snapshot the difference between two databases. It generates the scripts needed to update the database schema while preserving the data.
I need to find a tool like this for the Firebird database. We use Firebird in an embedded fashion, and would like to push out schema updates to remote machines with as little hassle as possible.
I don't know of a tool for Firebird that does exactly the same.
However, FlameRobin allows you to extract the metadata for single database objects or the complete database. It can also create scripts to recreate a certain database object including its dependencies. So you could either diff two database creation scripts and save the differences as the starting point (which may still need some changes), or you could use the recreation scripts for a single object and its dependencies.
This list contains a couple of comparison tools.
As @devio suggested, I took a look at the large list of administration tools listed on the IBPhoenix site. Of the tools on the list, the only two that generate scripts to migrate schema and data changes are XCase and Database Workbench.
Does anyone have experience with these tools? Are there others that I may have missed?
Embarcadero Change Manager will add support for InterBase and Firebird in the fall. Read all about it here. Change Manager includes schema archive compare and synchronizations, data compare, sync, and masking, and configuration management.
See IBExpert; it also has a command-line tool where you can run scripts in a proprietary language. You can compare two databases and get the script to update the target database. It does a great job with dependencies: for views, for example, it drops every dependent object where the view is used, alters the view, and then recreates the dropped objects. This can be done in the GUI too, along with a lot of other nice things.
Migration tools for Firebird on the IBPhoenix site are on a separate page: Contributed Downloads - Migration Tools.
Try SchemaCrawler:
SchemaCrawler is an open-source Java API that makes working with database metadata as easy as working with plain old Java objects.
SchemaCrawler is also a command-line tool to output your database schema and data in a readable form. The output is designed to be diff-ed with previous versions of your database schema.
As it requires a JDBC driver, you would also need the following: Firebird JDBC Driver
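For Firebird that could look roughly like the following. The JDBC URL, credentials and especially the option names are assumptions (they have changed between SchemaCrawler releases), so check the documentation for the version you use:

# Produce a plain-text schema listing that can be diffed between revisions
schemacrawler.sh \
  --url "jdbc:firebirdsql://localhost:3050/C:/data/mydb.fdb" \
  --user sysdba --password masterkey \
  --info-level standard \
  --command schema \
  --output-file schema.txt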