Interbase to Firebird Migration

Besides doing a data pump, are there any other solutions for migrating?
Can you take a GBK and restore it to Firebird? Are there any other migration issues you may have run into?

Besides doing a data pump, are there any other solutions for migrating?
No, this is the only solution.
Can you take a GBK and restore it to Firebird?
No, the backup files are not compatible.
Are there any other migration issues you may have run into?
You can run into many issues, and as @Mark Rotteveel says, the question is too broad. You can ask about the specific issues you hit.
I can point you to a few issues:
Ambiguous field names between tables - InterBase allows you to select from two tables with the same field names and use those names in the WHERE clause without aliasing them (see the sketch after this list)
Field not contained in the aggregate - InterBase checks fields incorrectly when you do a GROUP BY
ORDER BY in an aggregate, like SELECT COUNT(*) FROM table_name ORDER BY some_field - InterBase allows this, Firebird does not
COUNT(*) returns Int64 in Firebird; in InterBase it is Integer
Identifiers longer than 31 characters are not allowed in current Firebird; InterBase allows them but does not handle them properly, as it understands only the first 31 characters
If you use Delphi and IBX, you cannot use Boolean fields in Firebird, as the IBX handling is not compatible with Firebird
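For the ambiguous field name issue, a minimal sketch (the table and column names are made up) of a query InterBase tolerates but Firebird rejects, followed by the qualified fix:

-- InterBase resolves the unqualified NAME even though both tables
-- contain such a column; Firebird rejects the query as ambiguous
SELECT * FROM customers, suppliers
WHERE name = 'ACME';

-- Firebird-friendly version: alias the tables and qualify the column
SELECT * FROM customers c, suppliers s
WHERE c.name = 'ACME';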

Related

Sybase query to compare two table definitions in the same database

We have some sets of tables in the same database, like table1 and table_copy. We are planning to migrate the old data from table1 to table_copy, which is currently in use. But before that we have to compare the definitions of the tables so that the data import will be hassle-free. Can we compare the table definitions with a Sybase query?
I searched the net, but all I found were approaches to compare the data in two tables; we intend to compare the definitions only.
You could do queries on sysobjects, syscolumns and systypes.
Or you could compare with diff (or perl or whatever) the outputs of sp_help.
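As a minimal sketch of the sysobjects/syscolumns/systypes approach (using the table names from the question; run it a second time with the names swapped to catch differences in both directions):

-- columns of table1 with no matching name/type/length in table_copy
SELECT c1.name, t1.name AS type_name, c1.length
FROM syscolumns c1, systypes t1
WHERE c1.id = object_id('table1')
  AND c1.usertype = t1.usertype
  AND NOT EXISTS (
      SELECT 1
      FROM syscolumns c2, systypes t2
      WHERE c2.id = object_id('table_copy')
        AND c2.usertype = t2.usertype
        AND c2.name = c1.name
        AND t2.name = t1.name
        AND c2.length = c1.length
  )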
However, isn't this really a development and testing problem? You should perhaps copy the database into a pre-production database and test your scripts - repeat until perfect.
If you can only do the full migration on the production database for some resource reason (time, money, servers), then you need full dumps before starting.
Isn't the DDL for these two tables saved and accurate in a Version Control system somewhere? Perhaps they're from a 3rd party system though, so you don't have that.

Is there a good way to verify if a database schema is correct after an upgrade or migration?

We have customers who are upgrading from one database version to another (Oracle 9i to Oracle 10g or 11g to be specific). In one case, a customer exported the old database and imported it into the new one, but for some reason the indexes and constraints didn't get created. They may have done this on purpose to speed up the import process, but we're still looking into the reason why.
The real question is, is there a simple way that we can verify that the structure of the database is complete after the import? Is there some sort of checksum that we can do on the structure? We realize that we could do a bunch of queries to see if all the tables, indexes, aliases, views, sequences, etc. exist, but this would probably be difficult to write and maintain.
Update
Thanks for the answers suggesting commercial and/or GUI tools, but we really need something free that we can package with our product. It also has to be command-line or script-driven so our customers can run it in any environment (Unix, Linux, Windows).
Presuming a single schema, something like this - dump USER_OBJECTS into a table before migration.
CREATE TABLE SAVED_USER_OBJECTS AS SELECT * FROM USER_OBJECTS
Then, to validate after your migration:
SELECT object_type, object_name FROM SAVED_USER_OBJECTS
MINUS
SELECT object_type, object_name FROM USER_OBJECTS
One issue: if you have intentionally dropped objects between versions, you will also need to delete them from SAVED_USER_OBJECTS. Also, this will not detect cases where the wrong version of an object exists.
If you have multiple schemas, then the same thing is required for each schema OR use ALL_OBJECTS and extract/compare for the relevant user schemas.
You could also compute a hash/checksum over object_type||object_name for the whole schema (save before, compare after), but the cost of the calculation wouldn't be much different from comparing the two tables directly.
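As a sketch of extending the same check (only standard Oracle dictionary views are assumed), you can also run the MINUS in the other direction to spot unexpected new objects, and look for objects the import left invalid:

-- objects present now but missing from the pre-migration snapshot
SELECT object_type, object_name FROM USER_OBJECTS
MINUS
SELECT object_type, object_name FROM SAVED_USER_OBJECTS;

-- objects that exist but failed to compile after the migration
SELECT object_type, object_name
FROM USER_OBJECTS
WHERE status = 'INVALID';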
If you are willing to spend some money, DBDiff is an efficient utility that does exactly what you need.
http://www.dkgas.com/oradbdiff.htm
In SQL Developer (the free Oracle utility) there is a Database Schema Differences feature.
It's worth a try.
Hope it helps.
SQL Developer - download
Roni.
I wouldn't write the check script; I'd write a program to generate the check script from a particular version of the database. Just go through the metadata, record what's there, and write it to a file, then compare the values in that file against the values in the customer's database. This won't work so well if you use system-generated names for your constraints, but it is probably enough to just verify that things are there. Dropping indexes and constraints is pretty common when migrating a database, so you might not even need to check too much; if two or three things are missing, then it's not unreasonable to assume they all are. You might also want to write a script that drops all the constraints and indexes and re-creates them, and just have your customers run that as a post-migration step. Just be sure you drop everything by name, so you don't delete any custom indexes your customer might have created.
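A minimal sketch of that generate-and-compare idea in SQL*Plus terms (the spool file name is hypothetical): spool the object list from a known-good database, run the same script against the customer's database, and diff the two files.

SET HEADING OFF PAGESIZE 0 FEEDBACK OFF
SPOOL expected_objects.txt
-- one line per object; the query runs unchanged on any Oracle version
SELECT object_type || '|' || object_name
FROM user_objects
ORDER BY object_type, object_name;
SPOOL OFF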

How to convert a SQL Server database (including procedures, functions and triggers) to Firebird

I am considering migrating to Firebird. To have a "quick start" approach I downloaded the trial of a conversion tool (DBConvert) and tried it.
I just picked a random tool; it doesn't convert procedures, functions or triggers (I don't think this is a limitation of the trial, since there is no explicit reference to SPs, SFs and triggers in the link above).
Anyway, when trying that tool I got the message:
The DB cannot be converted successfully because some FK names are too long.
This is because some of my tables have FKs whose names are more than 32 characters long.
Is this a real Firebird limit, or is it possible to overcome it somehow (renaming the FKs is an extreme option, of course, because it is extra work)?
Anyway, how does one convert a SQL Server database fully to Firebird? Is there a valid tool? Has anyone succeeded in converting a non-trivial database?
You can use tools like Interbase DataPump, and you can also check this.
For the FK name size: you have to rename them :(
You can also try to do this with Database Workbench.
I doubt you'll be able to just "convert" all of that. Firebird/Interbase and Microsoft SQL Server use quite different data types, their SQL dialects differ somewhat, and so forth.
You could probably get a 60-80% conversion - but the rest will always be manual effort.
If your conversion fails just because of those FK constraints: drop those in SQL Server before the conversion, and re-create them on the Firebird side after conversion.
Or: drop them in SQL Server and re-create them with shorter names, and then do the conversion.
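A hypothetical sketch of that rename approach on the SQL Server side (the table, column and constraint names are made up):

-- drop the FK whose name exceeds Firebird's identifier length limit
ALTER TABLE orders DROP CONSTRAINT fk_orders_referencing_customer_master_records;

-- re-create it under a name short enough for Firebird
ALTER TABLE orders ADD CONSTRAINT fk_orders_customer
    FOREIGN KEY (customer_id) REFERENCES customers (customer_id);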
I know two more tools that might help you in the conversion: the ESF Database Migration Toolkit, and DeZign for Databases.

SQL Server 2005 CodePage Issue

I have a problem I am trying to resolve. We have a SQL Server 2005 instance running a commercial ERP system. The implication of this is that we cannot change the database structure, and all of the character fields are CHAR or VARCHAR rather than Unicode types (NCHAR, NVARCHAR).
We also have multiple instances of the ERP software, based on country. Each country has its own database on the same database server, which results in variations in the table names based on the instance of the ERP software that is running. For example, the US customer table is called US_CUSTOMER and the UK one is GB_CUSTOMER. We have created a separate database that essentially mirrors the ERP system tables with synonyms, and then views that handle all of our SQL transactions against these synonyms. This was done to use LINQ to SQL. Thanks for reading this far :)
The issue we have is that we are now implementing Simplified Chinese for the application. In the customer's ERP system, they set the code page for the ERP system so that when it writes to the base tables, the data is written as multi-byte. My question is: how can I get this multi-byte information translated back to Simplified Chinese? I would like to be able to do this at the database level, since I have both a web application and SSRS reports that need to take advantage of it.
Any ideas or directions? I don't think I can change the codepage, since multiple countries are using the same database server (though different databases).
Thanks ahead of time
Are we saying that 2 varchar characters are being used to store 1 Unicode character?
If so, try a CAST to binary and then to nvarchar, or something similar (sketched below).
Otherwise, look at COLLATE clauses to coerce the data?
Edit:
A CLR function might be your only bet to use Remus' suggestion of MultiByteToWideChar
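A hedged sketch of the CAST-to-binary idea from the first answer (the column name is an assumption, and the data is presumed to hold code page 936 bytes): cast to varbinary to keep the raw bytes, re-label them with a Chinese collation, and let the final conversion to nvarchar translate via code page 936.

-- assumes cust_name holds GBK (code page 936) bytes mislabelled
-- by a Latin1 collation; COLLATE re-labels the bytes so the outer
-- CAST to nvarchar converts them through code page 936
SELECT CAST(
           CAST(CAST(cust_name AS varbinary(8000)) AS varchar(8000))
           COLLATE Chinese_PRC_CI_AS
       AS nvarchar(4000)) AS cust_name_unicode
FROM US_CUSTOMER;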
What we ended up doing for this is writing a CLR function that can be called from our SQL statement. We pass in the string and the desired code page and get a converted string returned. The performance is not what we hoped for, but it seemed to be the only path we could find.

SQL Server Collation Conflict

We are transferring data from one SQL Server to another, but when the schema is compared and synchronised, the following error is received. We are using Redgate SQL Compare to do this.
Cannot resolve collation conflict for equal to operation
The base SQL Server is SQL_Latin1_General_CP1_CI_AS and the destination server is Latin1_General_CI_AS.
SQL Compare has an option to ignore collations. Look under the tab "options" in your compare project configuration.
Is your problem with the SQL Compare utility, or a worry that different server collations will lead to problems?
You could change the collation of the destination server to match the Base server
If that is not possible, then make the collation of the databases on each server match; then your only real problem is likely to be any temporary tables you create (they will have a default collation matching the server/TEMPDB). So long as you explicitly create the temporary table (i.e. don't create it using SELECT * INTO #TEMP FROM MyTable) and explicitly assign a collation to any varchar/text columns, you should be OK, as sketched below.
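A minimal sketch of that advice (the column names are made up; the collation shown is the destination server's):

-- create the temp table explicitly and pin the collation instead of
-- letting the columns inherit TEMPDB's default
CREATE TABLE #temp (
    id   int,
    name varchar(50) COLLATE Latin1_General_CI_AS
);

INSERT INTO #temp (id, name)
SELECT id, name FROM MyTable;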
The way I overcome this is to generate the scripts via SQL Compare and then strip out (or replace) the collation-specific code. This is relatively fast and easy to do, and finally I manually apply the scripts to the destination server/database.
Sounds like the collation settings for the server are different.
How are you transferring the data? Do you perform a database restore on your new platform?
Either way, you need to ensure that the same collation is used on your new environment as is currently in place in your source environment.
Hope this makes sense, let me know if you need further assistance.
"Ignore collations" is definitely not going to work, for the reason stated above. The problem happens when migrating objects like views and stored procedures that use JOIN clauses on text fields that have differing collations.
If someone changes the default collation on the server and the column on the other side of the JOIN uses a specific collation, you've caused this issue. And it would happen in SQL Compare as well as if you just manually scripted the object in SSMS and moved it yourself.
There are two roads to fixing it - you could specify a COLLATE clause on the join and explicitly state the collation you want to use (sketched below), or you could change the destination database's default collation to match the source.
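A sketch of the first road (the table and column names are made up):

-- force both sides of the join predicate onto one explicit collation
SELECT a.id
FROM dbo.TableA AS a
JOIN dbo.TableB AS b
  ON a.code COLLATE Latin1_General_CI_AS
   = b.code COLLATE Latin1_General_CI_AS;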
I'm afraid there is no SQL Compare "magic bullet" to solve this.
