I have a backup of an SQL Server DB in .bak format which I've successfully managed to restore to a local instance of SQL Server Express. I now want to export both the structure and data in a format that MySQL will accept. The tools that I use for MySQL management typically allow me to import/export .sql files, but unfortunately Microsoft didn't see fit to make my life this easy!
I can't believe I'm the first to run into this, but Google hasn't been a great deal of help. Has anybody managed this before?
There will be 2 issues:
1) Datatypes. There isn't always a direct analog between an MS SQL type and a MySQL type. For example, MySQL handles timestamps very differently and has the cut-off for when you need to switch between varchar(n) and varchar(max)/text at a different value of n. There are also some small differences in the numeric types.
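For instance (these CREATE TABLE statements are purely illustrative; the table/column names and exact type choices are assumptions, not from any real schema):

    -- MS SQL Server
    CREATE TABLE notes (
        id INT IDENTITY(1,1) PRIMARY KEY,
        body VARCHAR(MAX),                           -- long text column
        modified DATETIME DEFAULT GETDATE()
    );

    -- Rough MySQL equivalent
    CREATE TABLE notes (
        id INT AUTO_INCREMENT PRIMARY KEY,
        body TEXT,                                   -- varchar has a different practical cut-off here
        modified TIMESTAMP DEFAULT CURRENT_TIMESTAMP -- note MySQL TIMESTAMP semantics differ from MS SQL
    );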
2) Query syntax. There are a few differences in the query syntax that, again, don't always have a 1:1 analog replacement. The one that comes to the top of my mind is SELECT TOP N * FROM T in MS SQL becomes SELECT * FROM T LIMIT N in MySQL (MySQL makes paging loads easier).
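A quick side-by-side, where T is just a placeholder table with an id column:

    -- MS SQL Server
    SELECT TOP 10 * FROM T ORDER BY id;

    -- MySQL (LIMIT also takes an offset, which is what makes paging easier)
    SELECT * FROM T ORDER BY id LIMIT 10;
    SELECT * FROM T ORDER BY id LIMIT 20, 10;   -- rows 21 to 30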
Related
I want to create a script to export the data, tables and views to a .sql script.
I have SQL Server 2008 R2.
So far I've only been able to automatically generate an SQL script for all tables and views, but the data wasn't included.
Or is there an easier way to export data, tables and views from one SQL Server to my ISP's SQL Server?
Regards
Tea
If for some reason a backup/restore won't work for you, SSMS's Generate Scripts tool includes an advanced scripting option ("Types of data to script") that lets you include the data as well as the schema.
Here are some options to think over (prioritised in terms of what I would recommend):
A simple backup and restore will be the easiest and quickest solution;
Using a data scripting tool (like Red Gate's Data Compare) could meet your needs;
Use the database comparison tools that are part of Visual Studio;
An SSIS package could be developed to pump data back and forth between the two instances; or
Write your own script using the SET IDENTITY_INSERT ON/OFF command for the identity-seeded tables (a rough sketch follows this list).
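For that last option, the pattern is roughly this (the table and linked-server names are just placeholders):

    -- On the destination, allow explicit values into the identity column
    SET IDENTITY_INSERT dbo.Customers ON;

    INSERT INTO dbo.Customers (CustomerID, Name)
    SELECT CustomerID, Name
    FROM SourceServer.SourceDb.dbo.Customers;   -- e.g. pulled over a linked server

    SET IDENTITY_INSERT dbo.Customers OFF;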
The easiest way to do this is to create a backup, copy the .bak file to the other server, and restore the backup there.
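In T-SQL that is roughly the following (the database name, paths and logical file names are placeholders; RESTORE FILELISTONLY will tell you the real logical names):

    -- On the source server
    BACKUP DATABASE MyDb TO DISK = 'C:\Backups\MyDb.bak';

    -- Copy the .bak file across, then on the destination server
    RESTORE DATABASE MyDb FROM DISK = 'D:\Backups\MyDb.bak'
    WITH MOVE 'MyDb'     TO 'D:\Data\MyDb.mdf',
         MOVE 'MyDb_log' TO 'D:\Data\MyDb_log.ldf';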
As @jhewlett said, that will be the best way to do it. To answer the question in the comment section: no, it shouldn't be a problem. Just make sure that the SQL Server versions are the same. I had a bit of an issue not too long ago where two PCs had different releases of R2 installed and the backup couldn't be restored. Another thing you can do is script the entire database with data, but I wouldn't recommend this, as it can take a long time to generate the script and for it to finish running on the other computer.
Or you can simply stop the SQL Server instance, copy the database files onto an external hard drive, and re-attach them on the other server. Just remember to start the instances again after doing this.
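If you'd rather detach the database than stop the whole instance, a rough sketch (names and paths are placeholders):

    -- On the source server
    EXEC sp_detach_db 'MyDb';

    -- Copy MyDb.mdf and MyDb_log.ldf to the other machine, then on the destination:
    CREATE DATABASE MyDb
        ON (FILENAME = 'D:\Data\MyDb.mdf'),
           (FILENAME = 'D:\Data\MyDb_log.ldf')
        FOR ATTACH;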
I use Navicat Premium for this kind of thing in MySQL. It generates SQL for data, tables, views and anything else. It also provides tools to copy or synchronize tables from one database to another across different servers or platforms. For example, I use it a lot to transfer my tables from MySQL to an SQLite database; it's quick and easy, whereas doing it manually would be a lot of trouble.
It's a very good tool, and one I'd consider essential for any DB admin or programmer. It supports MySQL, Oracle, MS SQL Server, PostgreSQL and SQLite.
To generate a script with both schema and data, follow these steps.
Right-click the database you want to script > Tasks > Generate Scripts > click Next in the wizard >
select the DB objects to script and click Next >
go to the Advanced options and scroll down >
find "Types of data to script" and choose the option you need >
then Next, Next, and Finish.
Enjoy it.
If you don't want to port all of the table data (for example, you only need some base data in specific tables), the scripting options aren't much use to you. In that case you have two options: use a third-party tool such as Red Gate's, or write the script yourself. I prefer the second option, partly because most of those tools are expensive and partly because I usually only want to run a small script for a few deletes, updates and inserts. The catch is that the record count may be too large to write scripts record by record. I think a linked server is a good way around that: just declare a linked server as you see in the images, then write scripts in your source DB that have access to both the source and destination databases (see the script sketched after the steps below). The attached images should make this clear.
Create New Linked Server:
Write Destination SQL Server Address:
Fill Login Info:
Now you have Linked Server:
Write script and enjoy:
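For example, something along these lines (the server, database, table and column names are all placeholders):

    -- Declare the linked server (or use the SSMS dialogs shown above;
    -- the login mapping is configured separately, as in the "Fill Login Info" step)
    EXEC sp_addlinkedserver
        @server     = 'DestServer',
        @srvproduct = '',
        @provider   = 'SQLNCLI',
        @datasrc    = '192.168.1.10';

    -- Then, from the source database, push just the base data you need
    INSERT INTO DestServer.DestDb.dbo.Lookups (Id, Code, Description)
    SELECT Id, Code, Description
    FROM dbo.Lookups
    WHERE IsBaseData = 1;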
Hope this helps.
I have been developing an application in C# with VS 2010 for the past 4 months. I have used MS Access 2007 to store my nearly 20 tables successfully.
Today I realized that my database cannot be handled reliably by MS Access 2007, so I decided to go for SQL Server 2008 R2 Express with the Upsizing Wizard, and it worked really well!
However, when I tried to run various parts of my already well-developed application, it kept throwing an error each time a query was fired at SQL Server.
I came to understand that much of the SQL syntax supported by MS Access is not supported by MS SQL Server.
For example, in date queries Access uses '#' to delimit date literals, which SQL Server 2008 won't recognize.
Also, for Boolean values, MS Access stores True and False, whereas SQL Server uses 1 and 0.
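To illustrate (the table and column names below are only placeholders), a query like this works in Access:

    SELECT * FROM Orders WHERE OrderDate > #01/31/2012# AND IsShipped = True;

whereas SQL Server expects something more like:

    SELECT * FROM Orders WHERE OrderDate > '2012-01-31' AND IsShipped = 1;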
All of these queries worked perfectly with Access 2007.
I am sure there must be some method by which SQL Server can understand MS Access queries.
Or will I have to edit my whole application? That would be as good as digging a mine to earn gold...
I have already changed all the data access objects such as the reader, adapter, command and connection to the SQL data objects in System.Data.SqlClient.
So that is not the problem.
Please help me asap.
Thank you.
You cannot force SQL Server to run the MS Access queries. These queries will need to be rewritten to use T-SQL instead of the query language that MS Access uses.
I feel your pain; I just had to rewrite a large MS Access application (over 1k queries) so it could be used with SQL Server.
Some queries may be able to be ported over directly, but as you noticed, date queries, and even some of the aggregate functions (First(), etc.), are not supported in SQL Server, and those queries will need to be changed.
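For example, an Access query using First() (hypothetical names below) has no direct T-SQL equivalent; it is usually rewritten with TOP and an explicit ORDER BY, or with MIN/MAX if any value will do:

    -- Access
    SELECT First(CustomerName) AS FirstCustomer FROM Orders;

    -- One possible T-SQL rewrite
    SELECT TOP 1 CustomerName AS FirstCustomer
    FROM Orders
    ORDER BY OrderID;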
Here is a link with some info on converting Access to SQL
Converting Access Queries to SQL Server
You are right that, most of the time, you cannot just take the SQL of a query from Access and run it within SQL Server. It may work for very simple queries, but usually you need to tweak them.
There are a few steps I would take:
Extract your queries (which I presume are in your code), and re-create them in your Access database. Make sure they work there as normal Access queries.
(you can for instance simply add some code to your app to print all queries to files so you don't have to mess with parameters, then just copy/paste them in your Access DB).
The point is simply to have working queries within Access.
Use SSMA from Microsoft to help you move your queries to SQL Server. It does a good job of translating them into T-SQL.
You may still have to convert some troublesome queries by hand, but it shouldn't be that many and usually the conversion is not difficult.
Once converted to T-SQL, just re-inject these working queries into your code, or keep the complex queries in SQL Server as views (which will usually be faster, as SQL Server will have already created its execution plan, rather than your application sending raw SQL that the server needs to analyse).
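For instance, a converted Access query could be kept on the server as something like this (purely illustrative names), and the application then just selects from the view:

    CREATE VIEW dbo.vwActiveCustomers AS
    SELECT CustomerID, CustomerName, Region
    FROM dbo.Customers
    WHERE IsActive = 1;

    -- In the application:
    -- SELECT * FROM dbo.vwActiveCustomers WHERE Region = 'West';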
As you pointed out, there could be some issues if your fields use some features that don't cross-over to SQL Server properly.
Look at your tables in Access and do some cleanup before attempting to convert (a couple of example check queries are sketched after this list):
For Boolean fields:
Make sure you set their default values to 0 or 1 (they should not be empty).
Required fields must be non-null:
Make sure that any fields you have set as 'Required' do not contain any NULL values in their data.
Unique indexes cannot ignore Null:
Check that your indexes are not set to be both 'Unique' and 'Ignore null'.
All tables must have clean primary keys:
Make sure all your tables have a unique primary key that doesn't contain Null values in its data.
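A couple of quick checks along those lines, with made-up table and column names (they run as ordinary queries in Access, and the same idea works in T-SQL after the move):

    -- Find rows that would break a 'Required' (NOT NULL) field
    SELECT * FROM Customers WHERE LastName IS NULL;

    -- Find empty Boolean fields before setting a 0/1 default
    SELECT * FROM Customers WHERE IsActive IS NULL;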
I am working on a project that migrates databases from Oracle 10g to SQL Server 2008 using SSMA (SQL Server Migration Assistant). I want to know if there is a way to compare the data in tables that reside in a tablespace, say 'A', on Oracle with the corresponding migrated database 'A' on SQL Server.
I am not bothered about the data types of the various columns right now. If there is a way to map them, that would be great. I am just concerned with whatever data differences exist, if any.
Let me know if you are aware of any free tool which does this, or if any of you have written a tool which could help me do the same.
Thanks !!
You will have to map the PK from the source to the destination and, if the columns are the same, fetch the data in bulk and compare...
Lots of hard work.
Maybe it would be better if you could just count the rows and verify a statistical sample of records (see the sketch below).
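For instance, assuming a linked server to the Oracle source has been set up (the linked server, schema and table names here are just placeholders), a rough row-count check from the SQL Server side could look like:

    -- Rows on the SQL Server side
    SELECT COUNT(*) AS sql_rows FROM dbo.EMPLOYEES;

    -- Rows on the Oracle side, fetched over the linked server
    SELECT * FROM OPENQUERY(ORACLE_LINK,
        'SELECT COUNT(*) AS ora_rows FROM A.EMPLOYEES');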
I am migrating data from Oracle on VMS, which accesses data on SQL Server using Heterogeneous Services (over ODBC), to Oracle on AIX, which accesses the SQL Server via Oracle Gateways (dg4msql). The Oracle VMS database used the WE8ISO8859P1 character set. The AIX database uses WE8MSWIN1252. The SQL Server database uses "Latin1-General, case-insensitive, accent-sensitive, kanatype-insensitive, width-insensitive for Unicode Data, SQL Server Sort Order 52 on Code Page 1252 for non-Unicode Data" according to sp_helpsort. The SQL Server database uses nchar/nvarchar for all string columns.
In Application Express, extra characters are appearing in some cases; for example, 123 shows up as %001%002%003. In SQL*Plus things look OK, but if I use Oracle functions like INITCAP, I see what appear to be spaces between each letter of a string when I query the SQL Server database (using a database link). This did not occur under the old configuration.
I'm assuming the issue is that an nchar has extra bytes in it and the character set in Oracle can't convert it. It appears that the ODBC solution didn't support nchars, so it must have just cast them back to char, and they showed up OK. I only need to view the SQL Server data, so I'm open to any solution such as casting, but I haven't found anything that works.
Any ideas on how to deal with this? Should I be using a different character set in Oracle, and if so, does that apply to all schemas? I only care about one of them.
Update: I think I can simplify this question. The SQL Server table uses nchar. SELECT DUMP(column) FROM table returns Typ=1 Len=6: 0,67,0,79,0,88 when the value is 'COX', whether I select over a remote link to SQL Server, cast the literal 'COX' to an nvarchar, or copy it into an Oracle table as an nvarchar. But when I select the column itself, it appears with extra spaces only when selecting over the remote SQL Server link. I can't understand why DUMP would return the same thing while selecting without DUMP shows different values. Any help is appreciated.
There is an incompatibility between Oracle Gateways and nchar on that particular version of SQL Server. The solution was to create views on the SQL Server side casting the nchars to varchars. Then I could select from the views via gateways and it handled the character sets correctly.
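A minimal sketch of that kind of view (the table and column names are just examples, and the varchar lengths would need to match your data):

    -- On the SQL Server side
    CREATE VIEW dbo.vw_customers_for_oracle AS
    SELECT CAST(customer_name AS varchar(100)) AS customer_name,
           CAST(city          AS varchar(50))  AS city
    FROM dbo.customers;

Oracle then selects from the view over the gateway instead of from the base table.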
You might be interested in the Oracle NLS Lang FAQ
I am replacing an Access application with a web app, but the client is using SQL Server 2000, and I am using SQL Server 2008.
So, I have the database redesigned, with foreign keys, but now I need to get the data on the client's system.
Part of the problem is that they have images that are over 32k, so osql failed as the command buffer filled up.
I should be able to use osql to import the new schema at least, and perhaps all of the data except for the images.
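Something like this is what I have in mind for the schema import (the server, database and file names are placeholders):

    osql -S MYSERVER\SQLEXPRESS -d TargetDb -E -i schema.sql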
The Export wizard just wouldn't work, even though I tried both the SQL Native Client driver and the OLE DB SQL driver.
Flat files seem like a bad choice, as I don't know if they can handle the images.
So, what is a good way to copy a 330 MB database from 2008 -> 2000?
Not sure about performance or time needed, but you could always try a tool like
Red-Gate SQL Compare / SQL Data Compare
Apex SQL Diff / SQL Data Diff
These will allow you to compare both the schema and the data of two databases, and will let you create synchronization scripts or synchronize online.
Marc
I set the image column to null, which reduced the size of the insert statements.
This enabled me to import the data into the target database.