Migrate data from MariaDB to SQL Server

We are planning to migrate all the data from MariaDB to SQL Server. Can anyone suggest an approach to migrate the data so that no downtime is required and no data is lost?
In that context, I have gone through a few posts here, but did not get much insight.

You could look into SQL Server Integration Services functionality for migrating your data.
Or you could manually create a migration script using a linked server in your new SQL Server instance.
Or you could use BCP to perform bulk imports (which is quite fast, but requires intermediate steps to put the data in text files).
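For the linked-server route, a minimal sketch might look like the following. It assumes a linked server named MARIADB_SRC has already been set up (for instance via the MariaDB/MySQL ODBC driver), and all table and column names are made up for illustration:

    -- Pull rows from the MariaDB linked server into a SQL Server table.
    -- MARIADB_SRC, shop.customers and dbo.Customers are hypothetical names.
    INSERT INTO dbo.Customers (CustomerId, Name, CreatedAt)
    SELECT customer_id, name, created_at
    FROM OPENQUERY(MARIADB_SRC,
        'SELECT customer_id, name, created_at FROM shop.customers');

OPENQUERY executes the inner statement on the MariaDB side, which usually performs better than four-part naming because the remote server does the work of producing the rowset.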
What's more important is how you want to realize the "no downtime" requirement. I suppose the migration routines will need to meet some functional requirements that might be difficult to implement with a general migration tool, like:
the possibility to perform the migration in multiple batches/runs (where already migrated data is skipped; see the sketch after this list), and
the possibility to implement different phases of the migration in different solutions, like bulk imports (using text files and staging tables) for history data (which will not change anymore), but live queries over a live database connection for the latest updates in the MariaDB/MySQL database.
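As a rough illustration of the batched approach from the first point, here is a minimal T-SQL sketch that resumes from the highest key already copied. All object names are hypothetical, and it assumes an ever-increasing numeric primary key that is carried over unchanged:

    -- Resumable batch copy: skip rows that were already migrated by remembering
    -- the highest OrderId present in the target table. Run repeatedly until no
    -- more rows are copied. (If OrderId is an IDENTITY column on the SQL Server
    -- side, SET IDENTITY_INSERT would be needed around the INSERT.)
    DECLARE @LastId BIGINT, @BatchSize INT = 10000;

    SELECT @LastId = ISNULL(MAX(OrderId), 0) FROM dbo.Orders;

    INSERT INTO dbo.Orders (OrderId, CustomerId, OrderedAt, Total)
    SELECT TOP (@BatchSize) order_id, customer_id, ordered_at, total
    FROM OPENQUERY(MARIADB_SRC,
        'SELECT order_id, customer_id, ordered_at, total FROM shop.orders') AS src
    WHERE order_id > @LastId
    ORDER BY order_id;

Note that the WHERE clause is applied on the SQL Server side here; for large tables you would want to push the predicate into the remote query (for example by building it dynamically) so that only the new rows travel over the wire.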
The migration strategy might also largely depend on the size of the data in MariaDB/MySQL, and the structure of the database(s) and its data. Perhaps you want to keep auto-generated primary key values, because the system requires them to remain unchanged. Perhaps you need to use different data types for some exotic table fields. Perhaps you need to re-implement some database logic (like stored procedures and functions). Etc. etc.
It is very difficult to give ad-hoc advice about this kind of migration project; as Tim Biegeleisen already commented, this can be quite a complex job, even for "small" databases. It practically always requires a lot of research, extensive preparations, test runs (in a testing environment using database backups), some more test runs, a final test run, etc. And - of course - some analytics, metrics, logging, and reporting for troubleshooting (and to know what to expect during the actual migration). If the migration will be long-running, you want to make sure it does not freeze the live production environment, and you might also want some form of progress indication during the migration.
And - last but not least - you surely want to have a "plan B" or a quick return strategy in case the actual migration will fail (despite all those careful preparations).
Hope I did not forget something... ;-)

Related

How to create production database sample for testing?

Is there a tool (for PostgreSQL, ideally) which can make a small but consistent sample of a big database?
The thing is, we need a testing database, but we don't want to fully copy the production one. First, because it is too big, and second, because the nature of testing implies that the testing database will be recreated several times in the process.
Obviously, you cannot simply take random rows from some tables, because this will violate the hell out of foreign keys and whatnot. So I wonder, is there a tool available that can do that?
What about generating some mock data with a tool like Databene Benerator, just as much as you want, and storing it for reuse?
I once built such a tool for the IDMS system.
I was in the process of making it work for SQL systems too when the managers of the company we were merged into told me I could not continue wasting my time on such futile and unnecessary pieces of software.
Until this day, I have still neither seen nor heard of any commercially available thing that matches what I achieved way back then.
Yes, I wrote a tool to do this: https://github.com/mla/pg_sample
From the README:
pg_sample is a utility for exporting a small, sample dataset from a
larger PostgreSQL database. The output and command-line options
closely resemble the pg_dump backup utility (although only the
plain-text format is supported).
The sample database produced includes all tables from the original,
maintains referential integrity, and supports circular dependencies.
You can use pg_dump --schema-only to dump only the schema of the database. Then use pg_restore to load the dump into a new database. From there you have a few options:
Create your data by hand; this will allow you to cover edge cases but will take a while if you want to test on a lot of data.
Script a few queries to import random sections of each table in your database. As you said, this will violate foreign key constraints, but when it does, just ignore the failure. Keep track of the number of successes and keep going until you have as many data items as you want. Depending on your schema, this may not work if you have very restrictive constraints, because it might take too long to hit on data that succeeds.
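To make the second option concrete, here is a rough PL/pgSQL sketch that copies a random slice of one table and silently skips rows whose parents did not make it into the sample. It assumes the schema-only dump was restored into a separate schema called sample in the same database, and uses a made-up orders(order_id, customer_id, total) table:

    -- Copy up to 1000 random orders into the sample schema, ignoring rows whose
    -- referenced parents are missing from the sample. All names are hypothetical.
    DO $$
    DECLARE
        r  record;
        ok integer := 0;
    BEGIN
        FOR r IN
            SELECT order_id, customer_id, total
            FROM   public.orders
            ORDER  BY random()
            LIMIT  1000
        LOOP
            BEGIN
                INSERT INTO sample.orders (order_id, customer_id, total)
                VALUES (r.order_id, r.customer_id, r.total);
                ok := ok + 1;
            EXCEPTION WHEN foreign_key_violation THEN
                NULL;  -- parent row was not sampled; skip and keep going
            END;
        END LOOP;
        RAISE NOTICE '% rows copied into sample.orders', ok;
    END $$;

Repeating this per table (parents before children) gets you a small sample that still satisfies the foreign keys, at the cost of some skipped rows.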
Back in my Oracle days we would have a test database with a very small auto-generated set of data. At the time it was about a fifth of the production database size. We would then copy the stats from the production database and put them into our test database to make it think it had billions of rows in tables when in reality it only had 500,000. This allowed us to get the explain plans in test that we would get in production. It has its value, but it doesn't solve your whole question, and I'm not sure how easy or even feasible it is to mess with PostgreSQL's stats.
You can create custom scripts to create a meaningful copy of production data, or use commercial products such as Informatica Data Subset or IBM Optim.
You can use IBM Optim to build a production database sample. If you have production data available in the DB, go for Optim TDM; if there is no production data, go for IBM Optim TDF (Test Data Fabricator).

How do you manage databases during development?

My development team of four people has been facing this issue for some time now:
Sometimes we need to be working off the same set of data. So while we develop on our local computers, the dev database is connected to remotely.
However, sometimes we need to run operations on the db that will step on other developers' data, i.e. we break associations. For this a local db would be nice.
Is there a best practice for getting around this dilemma? Is there something like an "SCM for data" tool?
In a weird way, keeping a text file of SQL insert/delete/update queries in the git repo would be useful, but I think this could get very slow very quickly.
How do you guys deal with this?
You may find my question How Do You Build Your Database From Source Control useful.
Fundamentally, effective management of shared resources (like a database) is hard. It's hard because it requires balancing the needs of multiple people, including other developers, testers, project managers, etc.
Often, it's more effective to give individual developers their own sandboxed environment in which they can perform development and unit testing without affecting other developers or testers. This isn't a panacea though, because you now have to provide a mechanism to keep these multiple separate environments in sync with one another over time. You need to make sure that developers have a reasonable way of picking up each other's changes (data, schema, and code). This isn't necessarily easier. A good SCM practice can help, but it still requires a considerable level of cooperation and coordination to pull it off. Not only that, but providing each developer with their own copy of an entire environment can introduce storage costs and require additional DBA resources to assist in the management and oversight of those environments.
Here are some ideas for you to consider:
Create a shared, public "environment whiteboard" (it could be electronic) where developers can easily see which environments are available and who is using them.
Identify an individual or group to own database resources. They are responsible for keeping track of environments, and helping resolve the conflicting needs of different groups (developers, testers, etc).
If time and budgets allow, consider creating sandbox environments for all of your developers.
If you don't already do so, consider separating developer "play areas", from your integration, testing, and acceptance testing environments.
Make sure you version control critical database objects - particularly those that change often like triggers, stored procedures, and views. You don't want to lose work if someone overwrites someone else's changes.
We use local developer databases and a single, master database for integration testing. We store creation scripts in SCM. One developer is responsible for updating the SQL scripts based on the "golden master" schema. A developer can make changes as necessary to their local database, populating as necessary from the data in the integration DB, using an import process, or generating data using a tool (Red Gate Data Generator, in our case). If necessary, developers wipe out their local copy and can refresh from the creation script and integration data as needed. Typically databases are only used for integration testing and we mock them out for unit tests so the amount of work keeping things synchronized is minimized.
I recommend that you take a look at Scott Allen's views on this matter. He wrote a series of blogs which are, in my opinion, excellent.
Three Rules for Database Work,
The Baseline,
Change scripts,
Views, stored procs etc,
Branching and Merging.
I use these guidelines more or less, with personal changes and they work.
In the past, I've dealt with this several ways.
One is the SQL Script repository that creates and populates the database. It's not a bad option at all and can keep everything in sync (even if you're not using this method, you should still maintain these scripts so that your DB is in Source Control).
The other (which I prefer) was having a single instance of a "clean" dev database on the server that nobody connected to. When developers needed to refresh their dev databases, they ran a SSIS package that copied the "clean" database onto their dev copy. We could then modify our dev databases as needed without stepping on the feet of other developers.
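If SSIS isn't available, a plain backup/restore achieves the same "copy the clean database onto a dev copy" refresh. This is only a hedged sketch with made-up database names, file paths and logical file names:

    -- Back up the pristine database once...
    BACKUP DATABASE CleanDev
        TO DISK = N'C:\Backups\CleanDev.bak'
        WITH INIT;

    -- ...then each developer restores it over their own copy as needed.
    -- 'CleanDev' and 'CleanDev_log' are the assumed logical file names.
    RESTORE DATABASE DevCopy_Alice
        FROM DISK = N'C:\Backups\CleanDev.bak'
        WITH REPLACE,
             MOVE 'CleanDev'     TO N'C:\Data\DevCopy_Alice.mdf',
             MOVE 'CleanDev_log' TO N'C:\Data\DevCopy_Alice_log.ldf';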
We have a database maintenance tool that we use to create/update our tables and our procs. We have a server that hosts an up-to-date database populated with data.
We keep local databases that we can play with as we choose, but when we need to go back to "baseline" we get a backup of the "master" from the server and restore it locally.
If/when we add columns/tables/procs we update the dbMaintenance tool, which is kept in source control.
Sometimes it's a pain, but it works reasonably well.
If you use an ORM such as NHibernate, create a script that generates both the schema and the data in the LOCAL development database of your developers.
Improve that script during the development to include typical data.
Test on a staging database before deployment.
We replicate the production database to a UAT database for the end users. That database is not accessible by developers.
It takes less than a few seconds to drop all tables, create them again and inject test data.
If you are using an ORM that generates the schema, you don't have to maintain the creation script.
Previously, I worked on a product that was data warehouse-related, and designed to be installed at client sites if desired. Consequently, the software knew how to go about "installation" (mainly creation of the required database schema and population of static data such as currency/country codes, etc.).
Because we had this information in the code itself, and because we had pluggable SQL adapters, it was trivial to get this code to work with an in-memory database (we used HSQL). Consequently we did most of our actual development work and performance testing against "real" local servers (Oracle or SQL Server), but all of the unit testing and other automated tasks against process-specific in-memory DBs.
We were quite fortunate in this respect that if there was a change to the centralised static data, we needed to include it in the upgrade part of the installation instructions, so by default it was stored in the SCM repository, checked out by the developers and installed as part of their normal workflow. On reflection this is very similar to your proposed DB changelog idea, except a little more formalised and with a domain-specific abstraction layer around it.
This scheme worked very well, because anyone could build a fully working DB with up-to-date static data in a few minutes, without stepping on anyone else's toes. I couldn't say if it's worthwhile if you don't need the install/upgrade functionality, but I would consider it anyway because it made the database dependency completely painless.
What about this approach:
Maintain a separate repo for a "clean db". The repo will be a sql file with table creates/inserts, etc.
Using Rails (I'm sure this could be adapted for any git repo), maintain the "clean db" as a submodule within the application. Write a script (a rake task, perhaps) that runs the SQL statements against a local dev db.
To clean your local db (and replace with fresh data):
git submodule init
git submodule update
then
rake dev_db:update ......... (or something like that!)
I've done one of two things. In both cases, developers working on code that might conflict with others run their own database locally, or get a separate instance on the dev database server.
Similar to what #tvanfosson recommended, you keep a set of SQL scripts that can build the database from scratch, or
On a well defined, regular basis, all of the developer databases are overwritten with a copy of production data, or with a scaled down/deidentified copy of production, depending on what kind of data we're using.
I would agree with everything LBushkin has said in his answer. If you're using SQL Server, we've got a solution here at Red Gate that should allow you to easily share changes between multiple development environments.
http://www.red-gate.com/products/sql_source_control/index.htm
If there are storage concerns that make it hard for your DBA to allow multiple development environments, Red Gate has a solution for this. With Red Gate's HyperBac technology you can create virtual databases for each developer. These appear to be exactly the same as ordinary databases, but in the background, the common data is shared between the different databases. This allows developers to have their own databases without taking up an impractical amount of storage space on your SQL Server.

SQL Server - Production DB Schema vs. Reporting DB Schema. Should they be the same?

We recently put a new production database into use. The schema of this database is optimized for OLTP. We're also getting ready to implement a reporting server to be used for reporting purposes. I'm not convinced we should just blindly use the same schema for our reporting database as we do for our production database, and replicate data over.
For those of you that have dealt with having separate production and reporting databases, have you chosen to use the same database schema for your reporting database, or a schema that is more efficient for reporting; for example, perhaps something more denormalized?
Thanks for your thoughts on this.
There's really two sides to the story:
if you keep the schema identical, then updating the reporting database from the production is a simple copy (or MERGE in SQL Server 2008) command. On the other hand, the reports might get a bit harder to write, and might not perform optimally
if you devise a separate reporting schema, you can optimize it for reporting needs - then the creation of new reports might be easier and faster, and the reports should perform better. BUT: The updating is going to be harder
So it really boils down to: are you going to create a lot of reports? If so: I'd recommend coming up with a specific reporting schema optimized for reports.
Or is the main pain point the upgrade? If you can define and implement that once (e.g. with SQL Server Integration Services), maybe that's not really going to be a big issue after all?
Typically, chances are that you'll be creating a lot of reports over time, so there's a good chance it might be beneficial in the long run to invest a bit upfront in a separate reporting schema and a data loading process (typically using SSIS), and then reap the benefit of better performing reports and faster report creation.
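To illustrate the "simple copy" side of the first option above, a MERGE-based refresh (SQL Server 2008+) might look roughly like this, with made-up database, table and column names, and both databases assumed to sit on the same instance:

    -- Refresh the reporting copy of an identically-shaped table from production.
    MERGE Reporting.dbo.Orders AS tgt
    USING Prod.dbo.Orders      AS src
        ON tgt.OrderId = src.OrderId
    WHEN MATCHED AND (tgt.Status <> src.Status OR tgt.Total <> src.Total) THEN
        UPDATE SET tgt.Status = src.Status,
                   tgt.Total  = src.Total
    WHEN NOT MATCHED BY TARGET THEN
        INSERT (OrderId, CustomerId, Status, Total)
        VALUES (src.OrderId, src.CustomerId, src.Status, src.Total)
    WHEN NOT MATCHED BY SOURCE THEN
        DELETE;  -- remove rows that were deleted in production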
I think the reporting database schema should be optimized for reporting - so you'll need an ETL process to load your data. In my experience, I was quickly at the point where the production schema did not fit my reporting needs.
If you are starting your reporting project I would suggest that you design your reporting database for your reports needs.
For serious reporting, you usually create a data warehouse, which is typically at least somewhat denormalized, and where certain types of calculations are done when the data is refreshed, to save you from averaging the values of 1.3 million records when you run the report. This is for the kind of reporting that includes a lot of aggregate data.
If your reporting needs are not that great, a replicated database might work. It may also depend on how up-to-date you need the data to be, as data warehouses are typically updated once or twice a day, so the reporting data is often one day behind - OK for monthly and quarterly reports, but not so good for seeing how many widgets have been ordered so far today.
The determinant of whether you need a data warehouse tends to be how long it would take to run the reports they need. This is why data warehouses pre-aggregate data on loading it. If your reports are running fine and you just want to get the workload away from the input workload, a replicated database should do the trick. If you are trying to do math on all the records for the last ten years, you need a data warehouse.
You could do this in steps too. Do the replication now, to get reporting away from data input. That should be an immediate improvement (even if not as much as you want); then design and implement the data warehouse (which can be a fairly long and involved project and which will take some time to get right).
It's easiest just to copy over.
You could add some views to that schema to simplify queries - to conceptually denormalize.
If you want to go the full Data Warehouse/Analysis Services route, it will be quite a bit of work. But it's very fast, takes up less space, and users seem to like it. If you're concerned about large amounts of data and response times, you should look into this.
If you have many many tables being joined, you might look into actually denormalizing the data. I'd do a test case just to see how much gain for pain you'll be getting.
Without going directly for the data warehouse solution you could always put together some views that rearrange data for better reporting access. This helps you in that you don't have to start a large warehouse project right away and could help scope out a warehouse project if you decide to go that way.
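For example, a reporting view that flattens a few OLTP tables into one row per order line might look like this (the rpt schema and all table and column names are purely illustrative):

    CREATE VIEW rpt.OrderLinesFlat
    AS
    SELECT  o.OrderId,
            o.OrderedAt,
            c.CustomerName,
            c.Region,
            p.ProductName,
            ol.Quantity,
            ol.Quantity * ol.UnitPrice AS LineTotal
    FROM    dbo.Orders     AS o
    JOIN    dbo.Customers  AS c  ON c.CustomerId = o.CustomerId
    JOIN    dbo.OrderLines AS ol ON ol.OrderId   = o.OrderId
    JOIN    dbo.Products   AS p  ON p.ProductId  = ol.ProductId;

Report writers then query the view instead of repeating the joins, and you can later swap the view's definition for a real warehouse table without touching the reports.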
All the answers I've read here are good, I would just add that you do this in stages, stopping as soon as your goals for performance and functionality are met:
Keep the schema identical - this just takes contention and load off the OLTP server
Keep the schema identical - but add new indexed views OR index base tables differently (see the indexed-view sketch after this list)
Build a partial data-warehouse style model (perhaps not keeping snapshot-style history or slowly changing dimensions or anything special not catered for in your normal database) from the copy-schema in another schema or database on the same reporting server. The benefits of star-schema models are huge for reporting: views flattened for users, data dictionaries, etc. In this model, if your OLTP database loses changes (for instance customer name changes) due to overwrites, the data warehouse doesn't capture that information (often it's not that important if you stop at this spot). Effectively you are getting data warehouse-style organization for "current" data only. The benefits of retaining the copy of the original schema on your reporting server at this point are that you can pull from the source data in original SQL Server form instead of some kind of intermediate form (like text files) without affecting production OLTP, and you can migrate data models gradually, some in stars, some in normal form, all without affecting production. At some point later, you might be able to drop all or part of the copy.
Build a full data-warehouse including slowly changing dimensions where all the data is captured from the source system.
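As a sketch of the indexed-view stage, a pre-aggregated daily sales summary might look like the following. The names are hypothetical, and it assumes OrderDate is a date column and that Quantity and UnitPrice are NOT NULL (indexed views need SCHEMABINDING, two-part names, COUNT_BIG(*) and non-nullable, deterministic aggregates):

    CREATE VIEW dbo.DailySales
    WITH SCHEMABINDING
    AS
    SELECT  o.OrderDate,
            ol.ProductId,
            SUM(ol.Quantity * ol.UnitPrice) AS Revenue,
            COUNT_BIG(*)                    AS LineCount
    FROM    dbo.Orders     AS o
    JOIN    dbo.OrderLines AS ol ON ol.OrderId = o.OrderId
    GROUP BY o.OrderDate, ol.ProductId;
    GO

    -- Materializes the view; SQL Server then maintains it automatically.
    CREATE UNIQUE CLUSTERED INDEX IX_DailySales
        ON dbo.DailySales (OrderDate, ProductId);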

Step-by-step instructions for updating an (SQL Server) database?

Just a question about best-practices when upgrading an existing database. Assuming there will be all kinds of modifications to the data itself, the structure, the relations, additional columns, disappearing columns and whatever more.
My problem is a simple one. I'm working on a project that will use SQL Server. No problem there, since I'm enough of an expert to handle this. But this project will be upgraded later on and I need to specify a protocol that needs to be followed by the upgrade mechanism. Basically, this protocol needs to be followed when creating upgrade scripts...
Right now, I have these simple steps:
Add the new columns to the tables.
Add constraints to the new columns.
Add new tables.
Drop constraints where needed.
Drop columns that need to be removed.
Drop tables that need to be removed.
Somehow, this list feels incomplete. Is there a more extensive list somewhere describing the proper steps which need to be followed during an upgrade?
Also, is it always possible to do a complete upgrade within a single database transaction (with SQL Server), or are there breakpoints that need to be included within the protocol where one transaction should end and another one starts?
While automated tools will provide a nice, automated solution, I still can't really use them. The development team working on this system has 4 developers, each with their own database on their local system. Every developer keeps track of their own updates to the structure by generating both an Upgrade and a Downgrade script for their own modifications, both for structural changes and data changes. These scripts can then be used by the other developers to keep their own system up-to-date. Whenever the system is going to be released, those scripts are all merged into one big script.
The system does not include any stored procedures or other "special" features. The database is just that: a data store with just tables and relations between them. No roles, no users, no stored procedures, no triggers, no complex datatypes...
The DB is used by an application where users work from 9-to-5, so shutting down can be done easily, including upgrades for the clients. We also add a version number to the database, and applications will check if they're linked to the correct database version.
During development, all developers use their own database instance, which they can fully control. Since we're not the ones who use the application, we tend to develop for the Express edition, not a more expensive one. To be honest, we don't develop our application to support a lot of users, but we'll inform our users that since it uses SQL Server, they could install the system on a bigger SQL Server platform, according to their own needs. They will need their own DBA for this, though. We do have a bigger SQL Server available for ourselves, which we also use for our own web interface, but this server is located at a special data server where it is maintained for us, not by us.
The project previously used MS Access for its data storage and was intended for single-user use, but as it turned out, many users still decided to share their databases, and this has shown that the data model itself is reliable enough for multi-user environments. So we migrated to SQL Server to support smaller offices with 3 or more users and some big organisations which will have 500 or more users at the same time.
Since we need to keep the cost of the software low, we don't have a big budget to spend on expensive tools or a more expensive server.
Check out Red Gate's SQL Compare (structure comparison), SQL Data Compare (data comparison), and SQL Packager (for packaging up update scripts into a C# project or a .NET executable).
They provide a nice, clean, fully functional and easy-to-use solution for all your database upgrade needs. They're well worth their license fees - they pay for themselves in a few weeks or months.
Highly recommended!
In my opinion, it's an absolute bear doing these manually. For Microsoft SQL Server, I'd recommend using the Database edition of Team System, since it includes complete source control capabilities for your database and can automatically build your scripts for upgrading/downgrading versions.
Another option is SQL Compare from Red Gate, which can also handle these kinds of upgrades/downgrades and will result in a very nice SQL script. I've used both, and keeping the historic scripts has helped us troubleshoot issues and resolve many a mystery.
If you are working with a manual script as above, don't forget to also account for SP changes in your scripts. Also, any hand-edited script should be able to be executed multiple times on a database - i.e. if your script includes a table creation or drop, be sure to check for existence first, otherwise your script will fail if executed back to back.
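A re-runnable change guarded by existence checks might look roughly like this (all object names are hypothetical):

    -- Safe to execute more than once: each action checks whether it has
    -- already been applied before doing anything.
    IF COL_LENGTH('dbo.Customers', 'LoyaltyPoints') IS NULL
        ALTER TABLE dbo.Customers ADD LoyaltyPoints INT NOT NULL DEFAULT (0);

    IF OBJECT_ID('dbo.AuditLog', 'U') IS NULL
        CREATE TABLE dbo.AuditLog (
            AuditId   INT IDENTITY(1,1) PRIMARY KEY,
            ChangedAt DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME(),
            Detail    NVARCHAR(400) NOT NULL
        );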
Again, while it's possible to build a manual protocol I'd still fall back on using one of the purpose-built tools out there, and both Team System and SQL Compare will be able to output scripts that you could include as part of an installation/upgrade package.
With database updates I always believe it should be all or nothing. If any of the DB updates fail, your application will be left in an unknown state that could be harmful to the data, so I think it is best practice to either apply them all or none (one transaction around them all).
I also like to back up the database before applying updates so that if anything does go wrong, the database can be rolled back (this has saved me numerous times when working with live data).
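A minimal sketch of that approach, with made-up object names (most DDL in SQL Server is transactional, so schema changes can be rolled back together with the data changes; THROW requires SQL Server 2012 or later):

    -- Take a safety backup first (this cannot run inside the transaction).
    BACKUP DATABASE MyAppDb
        TO DISK = N'C:\Backups\MyAppDb_preupgrade.bak' WITH INIT;

    BEGIN TRY
        BEGIN TRANSACTION;

        ALTER TABLE dbo.Invoices ADD DueDate DATE NULL;
        UPDATE dbo.Settings SET SchemaVersion = N'2.4.0';

        COMMIT TRANSACTION;
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRANSACTION;
        THROW;  -- re-raise so the upgrade is reported as failed
    END CATCH;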
Hope this helps.
Best practices for upgrading a production database schema actually look pretty bad on the surface. Unless you can completely shut down your system for the upgrade, which is often not possible, your changes all need to be backwards compatible. If you have many clients accessing the database, you can't update them all simultaneously, so any schema changes you make need to allow old code to run.
That means never renaming a column, and making all new columns nullable. This doesn't mean you leave it like that forever. You write two scripts, one for the initial change, which is backwards compatible, then another to clean things up after all clients have been updated.
Automated tools are great for validation of schemas, but they are not so good when it comes to actually modifying a complex system. You should break your changes up into many small, discrete change scripts so each can be run manually. If there's a failure, it's easier to pinpoint the cause and fix it. Basically, each feature gets its own script. Give each a unique name and then store that name in the database itself when you run the script so you can query the database to find out what's been run and what hasn't. This is invaluable when you have instances on developer's machines, test servers, production, etc.
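A rough sketch of that tracking idea, with hypothetical script and object names:

    -- One row per change script that has been applied to this database.
    IF OBJECT_ID('dbo.SchemaChangeLog', 'U') IS NULL
        CREATE TABLE dbo.SchemaChangeLog (
            ScriptName NVARCHAR(200) NOT NULL PRIMARY KEY,
            AppliedAt  DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
        );

    -- Each feature script checks itself in, so it only ever runs once.
    IF NOT EXISTS (SELECT 1 FROM dbo.SchemaChangeLog
                   WHERE ScriptName = N'0042_add_invoice_paymentterms')
    BEGIN
        ALTER TABLE dbo.Invoices ADD PaymentTermDays INT NULL;
        INSERT INTO dbo.SchemaChangeLog (ScriptName)
        VALUES (N'0042_add_invoice_paymentterms');
    END;

Querying dbo.SchemaChangeLog then tells you exactly which scripts have been run on a developer machine, a test server or production.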

When is it time to change database backends?

Is there a general rule of thumb to follow when storing web application data, to know what database backend should be used? Is it the number of hits per day, the number of rows of data, or other metrics that I should consider when choosing?
My initial idea is that the order for this would look something like the following (but not necessarily, which is why I'm asking the question).
Flat Files
BDB
SQLite
MySQL
PostgreSQL
SQL Server
Oracle
It's not quite that easy. The only general rule of thumb is that you should look for another solution when the current one can't keep up anymore. That could include using different software (not necessarily in any globally fixed order), hardware or architecture.
You will probably get a lot more benefit out of caching data using something like memcached than switching to another random storage backend.
If you think you are ever going to need one of the heavyweights (SQL Server, Oracle), you should start with one of those at the beginning. Data migrations are extremely difficult. In the long run it will cost you less to just start at the top and stay there.
I think you're being overly specific in your rankings. You can pretty much start with flat files and the like for very small data sets, go up to something like DBM for slightly bigger ones that don't require SQL-like syntax, and go to some kind of SQL database after that.
But who wants to do all that rewriting? If the application will benefit from access to joins, stored procedures, triggers, foreign key validation, and the like--just use a SQL database regardless of the dataset size.
Which one should depend more on the client's existing installations and what DBA skills are available than on the amount of data you're holding.
In other words, the size of your database is far from the only consideration, and maybe not the most important one.
There is no blanket answer to this, but ALMOST always, using flat files is not a good idea. You have to parse through them (I suppose), and they do not scale well. Starting with a proper database, like Oracle or SQL Server (or MySQL or Postgres if you are looking for free options), is a good idea. For very little overhead, you will save yourself a lot of effort and headache later on. They also allow you to structure your data in a non-stupid fashion, leaving you free to think of WHAT you will do with the data rather than HOW you will be getting it in/out.
It really depends on your data, and how you intend to use it. At one of my previous positions, we used Postgres due to the native geo-location and timezone extensions which existed because it allowed us to manage our data using polygonal datatypes. For us, we needed to do that, and we also wanted to use stored procedures, views and the like.
Now, another place I worked at used MySQL simply because the data was normalized, standard row by row data.
SQL Server, for a long time, had a 4gb database limit (see SQL Server 2000), but despite that limitation it remains a very stable platform for small to medium applications for which the old data is purged.
Now, from working with Oracle and SQL Server 05/08, all I can tell you is that if you want the cream of the crop for stability, scalability and flexibility, then these two are your best bet. For enterprise applications, I strongly recommend them (merely because that's what we use where I work now).
Other things to consider:
Language integration (ASP.NET session storage, role management, etc.)
Query types (Select, Update, Delete) [although this is more of a schema design issue, not a DBMS issue]
Data storage requirements
Your application's utilization of the database is the most critical one. Mainly, which queries are used most often (SELECT, INSERT or UPDATE)?
Say you use SQLite: it is geared towards smaller applications, but for a "web" application you might need a bigger one like MySQL or SQL Server.
The way you write scripts and your web application platforms also matters. If you're developing on a Microsoft platform, then SQL Server is a better alternative.
Typically, I go with what is commonly accepted by whichever framework I am using. So, if I'm doing .NET => SQL Server, Python (via Django or Pylons) => MySQL or SQLite.
I almost never use flat files though.
There is more to choosing an RDBMS solution than just "back end horsepower". The ability to have commitment control, for example, so you can roll back a failed transaction, is one reason.
Unless you are in the megatransaction rate application, most database engines would be adequate - so it becomes a question of how much you want to pay for the software, whether it runs on the hardware and operating system environment you want, and what expertise you have in managing that software.
That progression sounds painful. If you're going to include MS products (especially the for-pay SQL Server) in there anywhere, you may as well use the whole stack, since you only have to pay for the last of these:
SQL Server Compact -> SQL Server Express -> SQL Server Enterprise (clustered).
If you target your app at SQL Server Compact initially, all your SQL code is guaranteed to scale up to the next version without modification. If you get bigger than SQL Server Enterprise, then congratulations. That's what they call a good problem to have.
Also: go back and check the SO podcasts. I believe they talked about this briefly.
This question depends on your situation really.
If you have control over the server you're deploying to and you can install whatever services you need, then the time to install a MySQL or MSSQL Express server and code against an existing database framework VERSUS coding against a flat file structure is not worth the effort of considering.
What about FireBird? Where would that fit into that list?
And let's not forget the requirements that the "customer" of your solution must also have in place. If you're writing a commercial application for small companies, then Oracle might not be a good choice... but if you're writing a customized solution for a large enterprise which must share data among multiple campuses, and has a good-sized IT department, then the decision of Oracle vs. SQL Server would come down to what the customer most likely already has deployed.
Data migration nowadays isn't that bad since we have those great tools from Embarcadero, so I would instead let the customer's needs drive the decision.
If you have the option, SQL Server is a good choice from the word go, predominantly because you have access to solid procedures and functions and the database backup facilities are totally reliable. Wrapping up as much of your logic as you can inside the database itself (rather than in whatever language you are using) helps security and performance - indeed there's a good argument to be made for always using procedures for insert/update logic, as these help protect against injection attacks.
If I had the choice, the only time I'd consider MySQL in preference is with a large, fairly simple database predominantly used for read access. This isn't to decry MySQL, which has improved markedly of late and which I happily use if I don't have the choice, but for more complex systems with update/insert activity MSSQL is generally the superior option.
I think your list is subjective but I will play your game.
Flat Files
BDB
SQLite
MySQL
PostgreSQL
SQL Server
Oracle
Teradata
