I have searched for this answer on many websites, and I am getting almost the same answer everywhere.
Liquibase helps ensure software releases are always delivered with
the matching state of the database so your features are deployed
quickly and confidently.
We can specify changes once and test them on different databases.
We could pick and choose the changes to be applied in different environments (dev, staging, QA) if we have such a requirement.
This can be done with JPA too (using a different properties file for staging, dev, and production). So when should we use Liquibase, and when is it required?
Liquibase & JPA are very different & have completely separate use cases.
Liquibase is a library for tracking, managing, and applying database schema changes automatically. It helps track all of your database schema changes in your own files (SQL, XML, YAML, JSON). You can think of Liquibase as a versioning-type tool for database scripts.
Say somebody inserts new data into a table, but only did it for one environment when it should have been done for all of them. An even worse scenario is that that person leaves the company & now no one knows the data is missing. This is a perfect example of how you lose consistency over time due to careless errors when managing multiple databases or multiple schemas. With Liquibase they wouldn't have to worry, because that database change is now properly saved & will execute for each environment the code is deployed to. Essentially, your Liquibase scripts become part of your project's code (depending on where they live, of course).
On the other hand, JPA can validate your existing codebase against your database, but it won't make the necessary DB changes for you (unless you specify something like create-drop, which I don't recommend for most use cases). Whenever you use any of its annotations such as @Entity, @Table, @Id, @Lob, etc., none of those changes are automatically applied to your database. JPA is great for modeling how your tables, relationships & constraints should look in your database. However, it is usually up to you to execute the necessary SQL (or other query language) for such changes. That's where Liquibase can conveniently handle it for you.
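For illustration, here's what a minimal Liquibase changelog can look like in its SQL format (the table, column and author names here are made up). The optional context attribute is also how you'd pick and choose which environments (dev, staging, QA) a change runs in:

    --liquibase formatted sql

    --changeset alice:add-customer-email context:dev,staging
    ALTER TABLE customer ADD email VARCHAR(255);
    --rollback ALTER TABLE customer DROP COLUMN email;

Running liquibase update against an environment's connection URL applies only the changesets that haven't run there yet (Liquibase records each one in its DATABASECHANGELOG table), so the same file can be replayed safely against every environment.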
Related
We have a large enterprise system which has many databases on a single server (Sybase)
Developers will make a change in one db, script it, then maybe make a change in another db, add that to the list of scripts and so on.
Our release then runs through these scripts making changes to the objects in different databases in the same order.
Reading the Liquibase documentation, it seems like it would work if you applied all the changes to one db, then another, then another. That wouldn't really work in our case, as a change in one db may rely on a change done earlier on another db and vice versa.
How could I use Liquibase to do the same?
You probably need to start looking at Datical DB (disclaimer: I work at Datical), which is a set of tools and extensions around Liquibase to handle these kinds of situations.
Alternatively, you could do something similar, writing your own tools to control Liquibase. Liquibase is controllable at several different levels - you could use a scripting tool to execute the Liquibase command line, or you could use Java or Groovy (or any other language that integrates with tools in the JVM) and use the Liquibase classes more directly.
Liquibase does not currently support connecting to multiple different databases. It is being considered for version 4.0.
If your databases are the same database instance but different schemas, you can use the schemaName attribute to target changeSets at different schemas from one changeSet to the next. You will need a single connection URL that has access to all of the schemas.
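In an XML changelog that's the schemaName attribute on each change; in a SQL-formatted changelog you get the same effect by schema-qualifying the objects yourself. A small sketch (schema and table names invented):

    --liquibase formatted sql

    --changeset bob:1
    CREATE TABLE schema_one.customer (id INT PRIMARY KEY);

    --changeset bob:2
    CREATE TABLE schema_two.audit_log (id INT PRIMARY KEY);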
If your databases are different instances or not all accessible from a single connection URL, you can probably create custom change classes or extensions that allow you to run SQL against different connections, although that will not be as clean or easy as the schemaName option.
In the scope of responsible programming and versioning, I would like to start versioning my database changes, especially since I am developing on my own database instance and then moving changes to production. I haven't found anything that truly makes sense to me on how to do this. I am using Visual Studio 2010 Pro as my IDE. Is there a document that makes this process simple and able to detect changes to the database with relative ease? Or what should I change in my workflow to make this easier?
One way that I've successfully done this sort of thing in the past is via SQL Source Control. Visual Studio does not offer this functionality for you.
Alternatively, you can use SSMS to generate the database scripts for you and save them as a file; then you can check in the script. You would choose whether you generate the whole DB script in one file or do it on an object-by-object basis. The syncing part will have to be done by you, by executing your scripts in production. In conclusion: a total nightmare.
Red Gate also offers SQL Compare, which is great for syncing databases. Take a look at their products if you or your company can afford them.
We use our own DB solution in-house which brings all the tools required for proper DB versioning. While I realize that it may not be a perfect solution for everyone, I invite you to have a look at it (it is open-source): bsn ModuleStore
The versioning aspect is as follows: the tool can script out the SQL semi-automatically, and it reformats the source code into a uniform format. The files will therefore always be identical for the same source, no matter when or by whom something was scripted; this works nicely with non-locking source control systems (especially SVN, Git or Mercurial).
The reformat puts all statements in the same form (e.g. optional keywords such as AS, INNER, OUTER etc. are dealt with), scripts everything to the "dbo" schema (even if it was in a different one), puts all identifiers into square brackets ([something]), uppercases all reserved words, does the indentation, etc.
Besides versioning, the runtime part of the tool can diff the running DB against the CREATE scripts (the DB source code) and apply updates automatically for all non-destructive changes (e.g. updating indexes, constraints, views, stored procedures, triggers, custom types, new tables etc.). Destructive changes have to be scripted manually (table changes which then usually require data transformations). The runtime will make sure that all updates are performed in a transaction and roll back if the resulting DB doesn't match the CREATE scripts; therefore you get the safety of knowing that the DB is exactly on the version required by the application, even if it has been tampered with manually.
Also, multiple "modules" can be used in a single database. Each module is stored as a schema and independent of other schemas, thereby making it possible to add or remove modules from one single DB, and avoiding the need to create multiple databases for different parts of the application. Also, the use of schemas to do this makes sure that there are no name collisions.
It may be worth noting that the toolset has no dependency on SMO; it is autonomous.
Save your database scripts in SVN. Here is the reference: How to use SVN Tortoise
OR
Save your database scripts in VSS. Here is the reference: What is VSS? How can we use it?
In both cases you can keep track of the changes made, so that in the future you can check the history, which is saved in the form of versions.
You can also use a Red Gate product.
EDIT
How do you pull out what has changed?
Use comparison feature to check the changes made in the previous versions.
How do I apply the changes to the live database server?
Download the latest file from the server.
I hope you are not using DROP statements for tables in your consolidated script, as that would delete all records from the table.
DROP statements are fine for stored procedures, views, functions, etc.
Please note that you have to run the complete, latest database script file on the production server with the action plan below (a sketch follows the list):
1. Remove DROP statements for schema DDL (tables).
2. Add DROP/CREATE statements for stored procedures and views.
3. Include ALTER statements for schema (DDL) changes.
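As a sketch of that action plan in T-SQL (all object names invented):

    -- 1. Schema DDL: no DROP; create the table only if it is missing.
    IF OBJECT_ID('dbo.Customer', 'U') IS NULL
        CREATE TABLE dbo.Customer (Id INT PRIMARY KEY, Name VARCHAR(100));
    GO

    -- 2. Stored procs/views: safe to drop and recreate.
    IF OBJECT_ID('dbo.GetCustomers', 'P') IS NOT NULL
        DROP PROCEDURE dbo.GetCustomers;
    GO
    CREATE PROCEDURE dbo.GetCustomers AS
        SELECT Id, Name FROM dbo.Customer;
    GO

    -- 3. Schema changes ship as ALTER statements, guarded so reruns are safe.
    IF COL_LENGTH('dbo.Customer', 'Email') IS NULL
        ALTER TABLE dbo.Customer ADD Email VARCHAR(255) NULL;
    GO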
Hope this helps you.
I am using github for maintaining versions and code synchronization.
We are team of two and we are located at different places.
How can we make sure that our databases are synchronized.
Update:
I am a Rails developer, but these days I'm working on Drupal projects (where the database is the center of variation). So I want to make sure the team has a synchronized database, including the values in the various tables.
I need something which keeps our data values synchronized.
A centralized database is a good solution, but things break down when someone works offline.
If you use Visual Studio, then you can script your database tables, views, stored procedures and functions as .sql files from a database solution and then check those into version control as well - it's what I currently do at my workplace.
If you don't use Visual Studio, then you can still script your SQL as .sql files [but with more work] and then version control them as necessary.
Have a look at Red Gate SQL Source Control - http://www.red-gate.com/products/SQL_Source_Control/
To be honest I've never used it, but their other software is fantastic. And if all you want to do is keep the DB schema in sync (rather than full source control) then I have used their SQL Compare product very successfully in the past.
(ps. I don't work for them!)
You can use SQL Source Control together with SQL Data Compare to source control both schema and data. Here is an article from Redgate: Source controlling data.
These are some of the possibilities.
Using the same database. Set up a central database that everybody can connect to. This way you are sure everybody uses the same database all the time.
After every change, export the database and commit it to the VCS. This option requires discipline and manual labor.
Use some other kind of definition of the schema. For example, Doctrine for PHP has the ability to build the database from a YAML definition which can be stored in the VCS. This can be automated more easily than point 2.
Use some other software/script which updates the database.
I feel your pain. I had terrible trouble getting SQL Server to play nice with SVN. In the end I opted for a shared database solution. Every day I run an extensive script to backup all our schema definitions (specifically stored procedures) for version control into text files. Due to the limited number of changes this works well.
I now use this technique for our major project and personal projects too. The only negative is that it relies on being connected all the time. The other answers suggest that full database versioning is very time consuming and I tend to agree. For "live" upgrades we use the Red Gate tools, they do both schema and data compare and it works very well.
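For what it's worth, the core of such a backup script on SQL Server 2005+ can be as simple as the query below (a sketch; you still have to write each definition out to its own file before committing):

    -- Current definition of every stored procedure in the database.
    SELECT o.name, m.definition
    FROM sys.sql_modules AS m
    JOIN sys.objects AS o ON o.object_id = m.object_id
    WHERE o.type = 'P'
    ORDER BY o.name;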
http://www.red-gate.com/products/SQL_Data_Compare/. We were using this tool for keeping databases in sync in our company. Later we had some specific demands, so we had to write our own code for synchronization. It depends how complex your database is and how many changes are happening. It is much simpler if you have a window when no one is working and you can lock the database for synchronization.
Check out OffScale DataGrove.
This product tracks changes to the entire DB - schema and data. You can tag versions at any point in time, and return to older states of the DB with a simple command. It also allows you to create virtual, separate copies of the same database so each team member can have his own separate DB. All the virtual copies are tracked in the same repository so it's super-easy to revert your DB to someone else's version (you simply check out their version, just like you do with your source control). This means all your DBs can always be synchronized.
Regarding a centralized DB - just like you don't want to work on the same source code, you don't want to be working on the same DB. It means you'll constantly break each other's code and builds each time someone changes something in the DB.
I suggest that you go with a separate DB for each developer, and sync them using DataGrove.
Disclaimer - I work at OffScale :-)
Try Wizardby. This is my personal project, but I've used it in several previous jobs with a great deal of success.
Basically, it's a tool which lets you specify all changes to your database schema in a database-independent manner and then apply these changes to all your databases.
We use SQL Server 2000/2005 and Vault or SVN on most of our projects. I haven't found a decent solution for capturing database schema/proc changes in either source control system.
Our current solution is quite cumbersome and difficult to enforce (script out the object you change and commit it to the repository).
We have a lot of ideas of how to tackle this problem with some custom development, but I'd rather install an existing tool (paid tools are fine).
So: how do you track your database code changes? Do you have any recommended tools?
Edit:
Thanks for all the suggestions. Due to time constraints, I'd rather not roll my own here. And most of the suggestions have the flaw that they require the dev to follow some procedure.
Instead, an ideal solution would monitor the SQL database for changes and commit any detected changes to SCM. For example, if SQL Server had an add-on that could record any DDL change with the user that made the change, then commit the script of that object to SCM, I'd be thrilled.
We talked internally about two systems:
1. In SQL 2005, use object permissions to restrict you from altering an object until you did a "checkout". Then, the checkin procedure would script it into the SCM.
2. Run a scheduled job to detect any changes and commit them (anonymously) to SCM.
It'd be nice if I could skip the user-action part and have the system handle all this automatically.
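For reference, a sketch of the query such a scheduled job could run on SQL Server 2005+ (the marker table holding the last run time is hypothetical):

    -- Objects changed since the job last ran; script out and commit each one.
    SELECT o.name, o.type_desc, o.modify_date
    FROM sys.objects AS o
    WHERE o.is_ms_shipped = 0
      AND o.modify_date > (SELECT LastRunAt FROM dbo.ScmSyncMarker);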
Use Visual Studio Database Edition to script out your database. Works like a charm, and you can use any source control system - of course it's best if it has VS plugins. This tool also has a number of other useful features. Check them out in this great blog post:
http://www.vitalygorn.com/blog/post/2008/01/Handling-Database-easily-with-Visual-Studio-2008.aspx
or check out MSDN for the official documentation
Tracking database changes directly from SSMS is possible using various third-party tools. ApexSQL Source Control automatically scripts any database object that is included in versioning. Commits cannot be performed automatically by the tool; instead, the user chooses which changes will be committed.
When getting changes from a repository, ApexSQL Source Control is aware of a SQL database's referential integrity. It will create a synchronization script including all dependent objects, wrapped in a transaction, so that either all changes are applied if no error is encountered, or none of the selected changes are. In any case, database integrity remains unaffected.
I have to say I think a Visual Studio database project is also a reasonable solution to the source control dilemma. If it's set up correctly, you can run the scripts against the database from the IDE. If your script is old, get the latest and run it against the DB. Have a script that recreates all the objects as well if you need it; new objects must be added to that script by hand, but only once.
I like every table, proc and function to be in its own file.
One poor man's solution would be to add a pre-commit hook script that dumps out the latest db schema into a file and have that file committed to your SVN repository along with your code. Then, you can diff the db schema files from any revision.
I just commit the SQL ALTER statement in addition to the complete SQL create-DB statement.
Rolling your own from scratch would not be very doable, but if you use a sql comparison tool like Redgate SQL Compare SDK to generate your change files for you it would not take very long to half-roll what you want and then just check those files into source control. I rolled something similar for myself to update changes from our development systems to our live systems in just a few hours.
In our environment, we never change the DB manually: all changes are done by scripts at release time, and the scripts are kept in the version control system. One important part of this procedure is to be sure that all scripts can be run again against the same DB without loss of data (i.e. the scripts are idempotent). For example, if you add a column, make sure that you do nothing if the column is already there.
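For example, a guarded column addition (table and column names invented; SQL Server flavor shown) that can run any number of times without harm:

    IF NOT EXISTS (
        SELECT 1 FROM INFORMATION_SCHEMA.COLUMNS
        WHERE TABLE_SCHEMA = 'dbo'
          AND TABLE_NAME  = 'Customer'
          AND COLUMN_NAME = 'Email'
    )
        ALTER TABLE dbo.Customer ADD Email VARCHAR(255) NULL;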
Your comment about "suggestions have the flaw that they require the dev to follow some procedure" is really a tell-tale. It's not a flaw, it's a feature. Version control helps developers in following procedures and makes the procedures less painful. If you don't want to follow procedures, you don't need version control.
In SQL 2000, generate each object into its own file, then check them all into your source control. Let your source control handle the change history.
In SQL 2005, you'll need to write a bit of code to generate all objects into separate files.
In one project I arranged by careful attention in the design that all the important data in the database can be automatically recreated from external places. At startup the application creates the database if it is missing, and populates it from external data sources, using a schema in the application source code (and hence versioned with the application). The database store name (a sqlite filename although most database managers allow multiple databases) includes a schema version, and we increase the schema version whenever we commit a schema change. This means when we restart the application to a new version with a different schema that a new database store is automatically created and populated. Should we have to revert a deployment to an old schema then the new run of the old version will be using the old database store, so we get to do fast downgrades in the event of trouble.
Essentially, the database acts like a traditional application heap, with the advantages of persistence, transaction safety, static typing (handy since we use Python) and uniqueness constraints. However, we don't worry at all about deleting the database and starting over, and people know that if they try some manual hack in the database then it will get reverted on the next deployment, much like hacks of a process state will get reverted on the next restart.
We don't need any migration scripts since we just switch database filename and restart the application and it rebuilds itself. It helps that the application instances are sharded to use one database per client. It also reduces the need for database backups.
This approach won't work if your database build from the external sources takes longer than you can allow the application to remain down.
If you are using .Net and like the approach Rails takes with Migrations, then I would recommend Migrator.Net.
I found a nice tutorial that walks through setting it up in Visual Studio. He also provides a sample project to reference.
We developed a custom tool that updates our databases. The database schema is stored in a database-neutral XML file which is then read and processed by the tool. The schema gets stored in SVN, and we add appropriate commentary to show what was changed. It works pretty well for us.
While this kind of solution is definitely overkill for most projects, it certainly makes life easier at times.
Our DBAs periodically check prod against what is in SVN and delete any objects not under source control. It only takes once before the developers never forget to put something in source control again.
We also do not allow anyone to move objects to prod without a script; since our devs do not have prod rights, this is easy to enforce.
Tracking every data change (insert, update and delete) would create a lot of overhead for SVN.
It is better to track only the DDL changes (ALTER, DROP, CREATE) which change the schema.
You can do this schema tracking easily by creating a table and a trigger that inserts data into that table.
Any time you want, you can get the change status by querying that table.
There are lots of examples here and here.
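A minimal sketch of that table-and-trigger pair on SQL Server 2005+ (all names invented):

    CREATE TABLE dbo.SchemaChangeLog (
        EventTime   DATETIME      NOT NULL DEFAULT GETDATE(),
        LoginName   NVARCHAR(128) NOT NULL,
        EventType   NVARCHAR(100) NOT NULL,
        ObjectName  NVARCHAR(256) NULL,
        TsqlCommand NVARCHAR(MAX) NULL
    );
    GO

    CREATE TRIGGER trgTrackDdl ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS  -- fires on CREATE/ALTER/DROP etc.
    AS
    BEGIN
        DECLARE @e XML;
        SET @e = EVENTDATA();
        INSERT INTO dbo.SchemaChangeLog (LoginName, EventType, ObjectName, TsqlCommand)
        VALUES (
            @e.value('(/EVENT_INSTANCE/LoginName)[1]',  'NVARCHAR(128)'),
            @e.value('(/EVENT_INSTANCE/EventType)[1]',  'NVARCHAR(100)'),
            @e.value('(/EVENT_INSTANCE/ObjectName)[1]', 'NVARCHAR(256)'),
            @e.value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]', 'NVARCHAR(MAX)')
        );
    END;
    GO

Querying SELECT * FROM dbo.SchemaChangeLog ORDER BY EventTime then gives you the change history.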
As you develop an application, database changes inevitably pop up. The trick, I find, is keeping your database build in step with your code. In the past I have added a build step that executed SQL scripts against the target database, but that is dangerous in so much as you could inadvertently add bogus data or worse.
My question is what are the tips and tricks to keep the database in step with the code? What about when you roll back the code? Branching?
Version numbers embedded in the database are helpful. You have two choices: embedding values into a table (allows versioning multiple items) that can be queried, or having an explicitly named object (such as a table or somesuch) you can test for.
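A sketch of the table-based option (names illustrative):

    CREATE TABLE SchemaVersion (
        Component VARCHAR(50) NOT NULL,  -- allows versioning multiple items
        Version   INT         NOT NULL,
        AppliedAt DATETIME    NOT NULL DEFAULT CURRENT_TIMESTAMP,
        PRIMARY KEY (Component, Version)
    );

    -- Each migration script records itself as its last step:
    INSERT INTO SchemaVersion (Component, Version) VALUES ('core', 42);

    -- And the application/build can assert the version it expects:
    SELECT MAX(Version) FROM SchemaVersion WHERE Component = 'core';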
When you release to production, do you have a rollback plan in the event of unexpected catastrophe? If you do, is it the application of a schema rollback script? Use your rollback script to rollback the database to a previous code version.
You should be able to create your database from scratch into a known state.
While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database.
I've definitely been interested in some sort of DB versioning system, but I haven't found anything yet. So, instead of a solution, you'll get my vote. :-P
You really do want to be able to take a clean machine, get the latest version from source control, build in one step, and run all tests in one step. Making this fast makes you produce good software faster.
Just like external libraries, database configuration must also be in source control.
Note that I'm not saying that all your live database content should be in the same source control, just enough to get to a clean state. (Do back up your database content, though!)
Define your schema objects and your reference data in version-controlled text files. For example, you can define the schema in Torque format, and the data in DBUnit format (both use XML). You can then use tools (we wrote our own) to generate the DDL and DML that take you from one version of your app to another. Our tool can take as input either (a) the previous version's schema & data XML files or (b) an existing database, so you are always able to get a database of any state into the correct state.
I like the way that Django does it. You build models, and when you run syncdb it applies the models that you have created. If you add a model you just need to run syncdb again. This would be easy to have your build script do every time you make a push.
The problem comes when you need to alter a table that is already made. I do not think that syncdb handles that. That would require you to go in and manually add the table and also add a property to the model. You would probably want to version that alter statement. The models would always be under version control though, so if you needed to you could get a db schema up and running on a new box without running the sql scripts. Another problem with this is keeping track of static data that you always want in the db.
Rails migration scripts are pretty nice too.
A DB versioning system would be great, but I don't really know of such a thing.
"While being able to do so is helpful (especially in the early stages of a new project), many (most?) databases will quickly become far too large for that to be possible. Also, if you have any BLOBs then you're going to have problems generating SQL scripts for your entire database."
Backups and compression can help you there. Sorry - there's no excuse not to be able to get a good set of data to develop against. Even if it's just a sub-set.
Put your database development under version control. I recommend having a look at neXtep Designer:
http://www.nextep-softwares.com/wiki
It is a free GPL product which offers a brand new approach to database development and deployment, by connecting version information with a SQL generation engine which can automatically compute any upgrade script you need to upgrade one version of your database to another. Any existing database can be brought under version control by a reverse synchronization.
It currently supports Oracle, MySQL and PostgreSQL; DB2 support is under development. It is a full-featured database development environment where you always work on version-controlled elements from a repository. You can publish your updates by simple synchronization during development, and you can generate exportable database deliveries which you will be able to execute on any targeted database through a standalone installer which validates the versions, performs structural checks and applies the upgrade scripts.
The IDE also offers you SQL editors, dependency management, support for modular database model components, data model diagrams, SQL clients and much more.
All the documentation and concepts can be found in the wiki.