I have a strange problem.
I tried to move a database from one server to another using pgAdmin III.
The database was created on a server running PostgreSQL 8.4.9, and I wanted to move it to a second server running PostgreSQL 8.2.11.
To do it, I used the "backup" option and saved the file; after that I used the "restore" option on the new database. The tables are loaded, but there aren't any functions in the new database.
Maybe it is because of the different PostgreSQL versions?
Does anyone know the reason? Any solution?
If the functions aren't around, double-check that plpgsql is available as a language. It's available by default nowadays, but making it available used to require a CREATE LANGUAGE statement.
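On a pre-9.0 server such as the 8.2 target here, that is a one-liner:

    -- Needed on old PostgreSQL versions where plpgsql is not
    -- installed by default (run once in the target database)
    CREATE LANGUAGE plpgsql;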
That said, I'd echo the comments: you really should be upgrading to a 9.x Postgres version that is still supported, rather than downgrading from an unsupported version to one that is even older.
I'd recommend doing it via pg_dump from an interactive session, exporting the complete database to one or more SQL files. You can use the -s switch to dump only the schema, which should include the created functions. With this SQL file in hand, it is also easier to backport your changes or debug if something doesn't apply cleanly to the older version.
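A rough sketch of that workflow (database and file names here are placeholders):

    # Schema only (-s) -- this includes function definitions
    pg_dump -s -f schema.sql mydb
    # Data only (-a), kept in a separate file
    pg_dump -a -f data.sql mydb
    # Restore both on the target server
    psql -d mydb -f schema.sql
    psql -d mydb -f data.sql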
Say, for example, you have built a program for a restaurant, for a cinema, whatever.
Now, how do you ensure that when you install your application, the database is installed correctly too? I'm not sure, but I believe this calls for a different kind of database, for example a file-based one?
(I'm talking about SQL.)
And how different are the queries going to be? I believe I won't have the same functions on SQL Server as on a file-based database.
What connection should I use?
Could I use Entity Framework?
And what capacity can the different file-based databases hold?
Regards
You can use a file-based database like SQLite, which supports SQL queries. There are ADO adapters available as well. The link should take care of the rest of your questions.
Well, since you usually have absolutely no knowledge about the target environment, the user must configure the program for his environment at install time, or later (at first launch, for example; this is much simpler than implementing the same functionality in the installer). The user specifies the SQL server address (if we are talking about server-based systems) and the database name he wants to use. Then the database is created programmatically using that information.
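In the simplest case, the installer or first-run code just connects to the server the user entered and executes the creation statements itself; a sketch (all names invented for the example):

    -- Sent as separate commands from the installer / first-run code
    CREATE DATABASE RestaurantApp;
    -- then, after connecting to the new database:
    CREATE TABLE Orders (
        OrderId int PRIMARY KEY,
        PlacedAt datetime NOT NULL
    );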
I have a Java web application that works with a database. I need an Ant script that will deploy or update my application to the latest version. There is no problem with the application part, but I don't know how to do the database update.
My idea is to build some meta-information (a version number) into the names of the SQL scripts.
For example:
DB_1.0.0.sql
DB_1.0.1.sql
DB_1.2.0.sql
DB_2.0.0.sql
DB_2.1.0.sql
Say my script detects that the current version is 1.0.1, so I need to execute the DB_1.2.0.sql, DB_2.0.0.sql, and DB_2.1.0.sql files via the SQL task. The problem is: how do I find the files with Ant that I need to execute?
Maybe this is not the best way to update a database. Do you have any other ideas?
Flyway works as you've described. It keeps a record of the SQL files already applied to the database, enabling an automatic upgrade. Simple and straightforward to use.
A more powerful solution, IMHO, is Liquibase. It has an XML syntax to record database changes, enabling the generation of cross-platform SQL. It also has some powerful features such as the ability to roll back changes and perform diffs between databases.
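For a flavour of that XML syntax, a minimal changelog might look roughly like this (table name and ids invented):

    <databaseChangeLog xmlns="http://www.liquibase.org/xml/ns/dbchangelog">
        <!-- Each changeSet is applied once and recorded by Liquibase -->
        <changeSet id="1" author="team">
            <createTable tableName="department">
                <column name="id" type="int"/>
                <column name="name" type="varchar(50)"/>
            </createTable>
        </changeSet>
    </databaseChangeLog>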
It looks like your filenames follow a strict convention. In that case you can find the files by matching a pattern with a fileset (or filelist) and execute them with Ant's sql task, as sketched below.
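A rough sketch of the Ant side (driver, URL, and credentials are placeholders; filtering out scripts at or below the current version still has to be added):

    <sql driver="com.example.jdbc.Driver"
         url="jdbc:example://localhost/mydb"
         userid="user" password="secret">
        <!-- Note: a fileset does not guarantee execution order; use a
             sorted resource collection or an explicit filelist to apply
             the DB_*.sql files in version order -->
        <fileset dir="sql">
            <include name="DB_*.sql"/>
        </fileset>
    </sql>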
You can use LiquiBase to write some tasks that can help with database schema updates.
I got a .db database file which one of my friends created with PowerBuilder 6 on Win98. Later I wanted to test that database file, but I was not able to view or open it in any of the common DB viewers, and I could not get any data out of it.
Please help.
I am using Win7 and also have XP (virtual).
The problem with your description is that PowerBuilder is database-agnostic, so it could be any type of database if it was being used with a PowerBuilder application. However, if you want to go with probabilities (and I'm not sure this is how PB is used most; at one point the most popular database used by PowerBuilder was Oracle), PowerBuilder shipped with a runtime license for SQL Anywhere, a database that was originally Watcom, acquired by Powersoft, which was acquired by Sybase, which was acquired by SAP.
Supposing the database you have in hand is SQL Anywhere, you need to get a SQL Anywhere engine. Probably the first thing I'd try is downloading the Developer version of SQL Anywhere and just try to open up a copy with that, see if the software will migrate it to the current version. (My bet is that it will, or will at least provide you with a means.) Another way to get a current version of SQL Anywhere (I think; I haven't tried this in ages) is to download a trial version of PowerBuilder 12.5, which I think comes with SQL Anywhere (the paid version does). If you get that up and running, then you can use a pipeline object in PowerBuilder to pretty easily move data from one database to another. And, for kicks, you can migrate up your PB6 app to see if it still runs. (My bet is that it will take a few tweaks, but fewer than you're probably imagining.)
Good luck,
Terry.
Your .db file is probably a Sybase SQL Anywhere database. You need to know which version of the engine was used to create the database, and then you need the ODBC driver to access that database.
RedGate makes a tool for Microsoft SQL Server that allows you to snapshot the difference between two databases. It generates the scripts needed to update the database schema while preserving the data.
I need to find a tool like this for the Firebird database. We use Firebird in an embedded fashion, and would like to push out schema updates to remote machines with as little hassle as possible.
I don't know of a tool for Firebird that does exactly the same.
However, FlameRobin allows you to extract the metadata for single database objects or the complete database. It can also create scripts to recreate a certain database object including its dependencies. So you could either diff two database creation scripts and save the differences as the starting point (which may still need some changes), or you could use the recreation scripts for a single object and its dependencies.
This list contains a couple of comparison tools.
As @devio suggested, I took a look at the large list of administration tools on the IBPhoenix site. Of the tools on the list, the only two that generate scripts to migrate schema and data changes are XCase and Database Workbench.
Does anyone have experience with these tools? Are there others that I may have missed?
Embarcadero Change Manager will add support for InterBase and Firebird in the fall. Read all about it here. Change Manager includes schema archiving, compare, and synchronization; data compare, sync, and masking; and configuration management.
See IBExpert; it has a command-line tool too, where you can run scripts in a proprietary language. You can compare two databases and get the script to update the target DB. It does a great job with dependencies: for example with views, it drops every dependency where the view is used, alters the view, and then recreates the dropped objects. This can be done in the GUI too, along with a lot of other nice things.
Migration tools for Firebird on the IBPhoenix site are under a separate link: Contributed Downloads - Migration Tools.
Try SchemaCrawler:
SchemaCrawler is an open-source Java API that makes working with database metadata as easy as working with plain old Java objects.
SchemaCrawler is also a command-line tool to output your database schema and data in a readable form. The output is designed to be diff-ed with previous versions of your database schema.
As it requires a JDBC driver, you would also need the following: Firebird JDBC Driver
I have read lots of posts about the importance of database version control. However, I could not find a simple solution for how to check whether a database is in the state it should be in.
For example, I have a database with a table called "Version" (the version number is stored there). But the database can be accessed and edited by developers without changing the version number. If, for example, a developer updates a stored procedure and does not update Version, the database state is no longer in sync with the version value.
How can I track those changes? I do not need to track what changed; I only need to check whether the database tables, views, procedures, etc. are in sync with the database version saved in the Version table.
Why do I need this? When deploying, I need to check that the database is "correct". Also, not all tables or other database objects need to be tracked. Is it possible to check without using triggers? Can it be done without third-party tools? Do databases have checksums?
Let's say we use SQL Server 2005.
Edited:
I should provide a bit more information about our current environment: we have a "baseline" with all the scripts needed to create the base version (it includes the data objects and "metadata" for our app). However, there are many installations of this "base" version with some additional database objects (additional tables, views, procedures, etc.). When we make a change to the "base" version we also have to update some of the installations (not all); at that point we have to check that the "base" is in the correct state.
Thanks
You seem to be breaking the first and second rule of "Three rules for database work". Using one database per developer and a single authoritative source for your schema would already help a lot. Then, I'm not sure that you have a Baseline for your database and, even more important, that you are using change scripts. Finally, you might find some other answers in Views, Stored Procedures and the Like and in Branching and Merging.
Actually, all these links are mentioned in this great article from Jeff Atwood: Get Your Database Under Version Control. A must read IMHO.
We use DBGhost to version control the database. The scripts to create the current database are stored in TFS (along with the source code) and then DBGhost is used to generate a delta script to upgrade an environment to the current version. DBGhost can also create delta scripts for any static/reference/code data.
It requires a mind shift from the traditional method but is a fantastic solution which I cannot recommend enough. Whilst it is a 3rd party product it fits seamlessly into our automated build and deployment process.
I'm using a simple VBScript file based on this codeproject article to generate drop/create scripts for all database objects. I then put these scripts under version control.
So to check whether a database is up-to-date or has changes which were not yet put into version control, I do this:
get the latest version of the drop/create scripts from version control (subversion in our case)
execute the SqlExtract script for the database to be checked, overwriting the scripts from version control
now I can check with my subversion client (TortoiseSVN) which files don't match with the version under version control
now either update the database or put the modified scripts under version control
You have to restrict access to all databases and only give developers access to a local database (where they develop) and to the dev server where they can do integration. The best thing would be for them to only have access to their dev area locally and perform integration tasks with an automated build. You can use tools like Red Gate's SQL Compare to do diffs on databases. I suggest that you keep all of your changes under source control (.sql files) so that you have a running history of who did what and when, and so that you can revert DB changes when needed.
I also like the devs to be able to run a local build script to re-initialize their local dev box. That way they can always roll back. More importantly, they can create integration tests that test the plumbing of their app (repository and data access) and the logic stashed away in stored procedures in an automated way: initialization is run (resetting the DB), integration tests are run (creating fluff in the DB), then reinitialization puts the DB back into a clean state, and so on.
If you are an SVN/nant style user (or similar) with a single branch concept in your repository then you can read my articles on this topic over at DotNetSlackers: http://dotnetslackers.com/articles/aspnet/Building-a-StackOverflow-inspired-Knowledge-Exchange-Build-automation-with-NAnt.aspx and http://dotnetslackers.com/articles/aspnet/Building-a-StackOverflow-inspired-Knowledge-Exchange-Continuous-integration-with-CruiseControl-NET.aspx.
If you are a perforce multi branch sort of build master then you will have to wait till I write something about that sort of automation and configuration management.
UPDATE
@Sazug: "Yep, we use some sort of multi branch builds when we use base script + additional scripts :) Any basic tips for that sort of automation without full article?" There are most commonly two forms of databases:
you control the db in a new non-production type environment (active dev only)
a production environment where you have live data accumulating as you develop
The first setup is much easier and can be fully automated from dev to prod, including rolling back prod if need be. For this you simply need a scripts folder where every modification to your database is maintained in a .sql file. I don't suggest that you keep a tablename.sql file and version it like you would a .cs file, where updates to that SQL artifact are modified in the same file over time, because SQL objects are so heavily dependent on each other that when you build up your database from scratch your scripts may encounter a breaking change. For this reason I suggest that you keep a separate, new file for each modification, with a sequence number at the front of the file name, for example something like 000024-ModifiedAccountsTable.sql. Then you can use a custom task, something out of NAntContrib, or a direct execution of one of the many ??SQL.exe command-line tools to run all of your scripts against an empty database, from 000001-fileName.sql through to the last file in the updateScripts folder. All of these scripts are then checked in to your version control. And since you always start from a clean DB, you can always roll back if someone's new SQL breaks the build.
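As a rough illustration, assuming one of the SQL Server command-line tools and a Windows batch file (server and database names are placeholders, and the script names are assumed to contain no spaces):

    REM Apply every migration script in file-name (i.e. sequence) order
    FOR /F %%f IN ('dir /b /o:n updateScripts\*.sql') DO (
        sqlcmd -S localhost -d MyAppDb -i "updateScripts\%%f"
    )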
In the second environment automation is not always the best route, given that you might impact production. If you are actively developing against/for a production environment then you really need a multi-branch/environment setup so that you can test your automation well before you actually push against a prod environment. You can use the same concepts as stated above. However, you can't really start from scratch on a prod DB, and rolling back is more difficult. For this reason I suggest using RedGate SQL Compare or similar in your build process. The .sql scripts are checked in for updating purposes, but you need to automate a diff between your staging DB and prod DB prior to running the updates. You can then attempt to sync changes and roll back prod if problems occur. Also, some form of backup should be taken prior to an automated push of SQL changes. Be careful when doing anything without a watchful human eye in production! If you do true continuous integration in all of your dev/qual/staging/performance environments and then have a few manual steps when pushing to production... that really isn't that bad!
First point: it's hard to keep things in order without "regulations".
Or, for your example: developers changing anything without notice will lead you into serious problems.
Anyhow, you say "without using triggers".
Any specific reason for this?
If not, check out DDL triggers. Such triggers are the easiest way to check whether something happened.
And you can even log WHAT was going on.
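On SQL Server 2005 a minimal version of that could look like this (the audit table and all names are made up for illustration):

    -- Hypothetical audit table for schema changes
    CREATE TABLE SchemaChangeLog (
        EventTime datetime NOT NULL DEFAULT GETDATE(),
        LoginName sysname NOT NULL,
        EventData xml NOT NULL
    );
    GO
    -- Fires on any DDL statement in this database and logs the event
    CREATE TRIGGER trgLogSchemaChanges
    ON DATABASE
    FOR DDL_DATABASE_LEVEL_EVENTS
    AS
    INSERT INTO SchemaChangeLog (LoginName, EventData)
    VALUES (ORIGINAL_LOGIN(), EVENTDATA());
    GO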
Hopefully someone has a better solution than this, but I do it using a couple of methods:
Have a "trunk" database, which is the current development version. All work is done here as it is being prepared to be included in a release.
Every time a release is done:
The last release's "clean" database is copied to the new one, e.g., "DB_1.0.4_clean".
SQL Compare is used to copy the changes from trunk to 1.0.4_clean; this also allows checking exactly what gets included.
SQL Compare is used again to find the differences between the previous and new releases (the changes needed to go from DB_1.0.3_clean to DB_1.0.4_clean), which creates a change script "1.0.3 to 1.0.4.sql".
We are still building the tool to automate this part, but the goal is to have a table that tracks every version the database has been at, and whether the change script was applied. The upgrade tool looks for the latest entry, then applies each upgrade script one by one until the DB is at the latest version.
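The tracking table itself can be very simple; something like this (names invented, not from an actual tool):

    -- One row per version the database has been upgraded to
    CREATE TABLE SchemaVersion (
        VersionNumber varchar(20) NOT NULL PRIMARY KEY,
        AppliedOn datetime NOT NULL DEFAULT GETDATE(),
        ScriptApplied bit NOT NULL DEFAULT 1
    );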
I don't have this problem, but it would be trivial to protect the _clean databases from modification by other team members. Additionally, because I use SQL Compare after the fact to generate the change scripts, there is no need for developers to keep track of them as they go.
We actually did this for a while, and it was a HUGE pain. It was easy to forget, and at the same time, there were changes being done that didn't necessarily make it - so the full upgrade script created using the individually-created change scripts would sometimes add a field, then remove it, all in one release. This can obviously be pretty painful if there are index changes, etc.
The nice thing about SQL Compare is that the script it generates runs in a transaction, and if it fails, it rolls the whole thing back. So if the production DB has been modified in some way, the upgrade will fail, and then the deployment team can use SQL Compare on the production DB against the _clean DB and manually fix the changes. We've only had to do this once or twice (damn customers).
The .SQL change scripts (generated by SQL Compare) get stored in our version control system (subversion).
If you have Visual Studio (specifically the Database edition), there is a Database Project that you can create and point at a SQL Server database. The project will load the schema and offer you a lot of other features. It behaves just like a code project. It also gives you the option to script the entire table and its contents so you can keep it under Subversion.
When you build the project, it validates that the database has integrity. It's quite smart.
On one of our projects we stored the database version inside the database.
Each change to the database structure was scripted into a separate SQL file which, besides all its other changes, incremented the database version. This was done by the developer who changed the DB structure.
The deployment script compared the current DB version against the latest change script and applied the SQL scripts if necessary.
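In its simplest form the pattern looks something like this (table and column names invented for the example):

    -- Every change script ends by bumping the stored version
    UPDATE DbVersion SET Version = '1.0.2';

    -- The deployment script first reads the current version...
    SELECT Version FROM DbVersion;
    -- ...and then applies only the change scripts numbered above it.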
Firstly, your production database should either not be accessible to developers, or the developers (and everyone else) should be under strict instructions that no changes of any kind are made to production systems outside of a change-control system.
Change control is vital in any system that you expect to work (where there is more than one engineer involved in the entire system).
Each developer should have their own test system; if they want to make changes to that, they can, but system testing should be done on a more controlled system-test environment which has the same changes applied as production. If you don't do this, you can't rely on releases working, because they're being tested in an incompatible environment.
When a change is made, the appropriate scripts should be created and tested to ensure that they apply cleanly on top of the current version, and that the rollback works*
*you are writing rollback scripts, right?
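For instance, each numbered change script could ship with a matching rollback (file names and DDL invented for illustration):

    -- 0042_add_email_column.sql
    ALTER TABLE Customers ADD Email varchar(255) NULL;

    -- 0042_add_email_column_rollback.sql
    ALTER TABLE Customers DROP COLUMN Email;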
I agree with other posts that developers should not have permission to change the production database. Either the developers should share a common development database (and risk treading on each other's toes) or they should have their own individual databases. In the former case you can use a tool like SQL Compare to deploy to production. In the latter case, you need to periodically sync up the developer databases during the development lifecycle before promoting to production.
Here at Red Gate we are shortly going to release a new tool, SQL Source Control, designed to make this process a lot easier. It will integrate into SSMS and let you add objects to and retrieve them from source control at the click of a button. If you're interested in finding out more or signing up to our Early Access Program, please visit this page:
http://www.red-gate.com/Products/SQL_Source_Control/index.htm
I have to agree with the rest of the posts. Database access restrictions would solve the issue in production. Then using a versioning tool like DBGhost or DVC would help you and the rest of the team maintain the database versioning.