Copy table schema from DB2 to SQL Server

I'm looking at creating staging tables in SQL Server for one of our SSIS packages, to reduce the number of calls to DB2, since those calls may time out when DB2 recycles inactive connections. Is there an automated method for copying table schema from DB2 to SQL Server? There would need to be a 1-to-1 mapping of data types between DB2 and SQL Server for this to work. If no such tool exists, I may write one myself, since some of our DB2 tables have 20+ columns and it would be a pain to recreate them manually in SQL Server.

I have a partially working script you're welcome to use. We don't care about primary keys and such from DB2 on our SQL Server side of things; our only concern is getting the data over. Also, the data I've had to deal with was only string- or date-based, so the branch where I build the data_type for DECIMAL may be incorrect.
The core concept is that I inspect sysibm.syscolumns to derive a list of all the tables and columns, and then try to provide a translation between the DB2 data types and their SQL Server equivalents.
Anyway, give it a shot. Feel free to edit it, or leave a comment about what's broken and I'll see if I can fix it.
This is built using a mix of the SQL Server 2012 CONCAT function and the classic string concatenation operator +. It also assumes a linked server exists for the OPENQUERY call to work.
WITH SRC AS
(
    SELECT
        OQ.NAME AS column_name
    ,   OQ.TBNAME AS table_name
    --, RTRIM(OQ.COLTYPE) AS data_type
    ,   CASE RTRIM(OQ.COLTYPE)
            WHEN 'INTEGER'  THEN 'int'
            WHEN 'SMALLINT' THEN 'smallint'
            WHEN 'FLOAT'    THEN 'float'
            WHEN 'CHAR'     THEN CONCAT('char', '(', OQ.LENGTH, ')')
            WHEN 'VARCHAR'  THEN CONCAT('varchar', '(', OQ.LENGTH, ')')
            WHEN 'LONGVAR'  THEN CONCAT('varchar', '(', OQ.LENGTH, ')')
            -- For DECIMAL, DB2 stores the precision in LENGTH and the scale in SCALE
            WHEN 'DECIMAL'  THEN CONCAT('decimal', '(', OQ.LENGTH, ',', OQ.SCALE, ')')
            WHEN 'DATE'     THEN 'date'
            WHEN 'TIME'     THEN 'time'
            -- The types below are not mapped yet; extend the translation as needed
            WHEN 'TIMESTMP' THEN ''
            WHEN 'TIMESTZ'  THEN ''
            WHEN 'BLOB'     THEN ''
            WHEN 'CLOB'     THEN ''
            WHEN 'DBCLOB'   THEN ''
            WHEN 'ROWID'    THEN ''
            WHEN 'DISTINCT' THEN ''
            WHEN 'XML'      THEN ''
            WHEN 'BIGINT'   THEN ''
            WHEN 'BINARY'   THEN ''
            WHEN 'VARBIN'   THEN ''
            WHEN 'DECFLOAT' THEN ''
            ELSE ''
        END AS data_type
    ,   OQ.LENGTH
    ,   OQ.SCALE
        -- In syscolumns, NULLS = 'Y' means the column is nullable
    ,   CONCAT(CASE OQ.NULLS WHEN 'N' THEN 'NOT' ELSE '' END, ' NULL') AS allows_nulls
    ,   OQ.UPDATES AS updateable
    FROM
        OPENQUERY(LINKED, 'SELECT * FROM abcde01.sysibm.syscolumns T WHERE T.TBCREATOR = ''ABCD'' ') AS OQ
)
, S2 AS
(
    SELECT
        CONCAT(QUOTENAME(S.column_name), ' ', S.data_type, ' ', S.allows_nulls) AS ColumnDeclaration
    ,   S.table_name
    FROM
        SRC AS S
)
, MakeItPretty AS
(
    SELECT DISTINCT
        QUOTENAME(S.table_name) AS table_name
    ,   STUFF
        (
            (
                SELECT ',' + SI.ColumnDeclaration
                FROM S2 AS SI
                WHERE SI.table_name = S.table_name
                FOR XML PATH('')
            ), 1, 1, ''
        ) AS column_list
    FROM
        S2 AS S
)
SELECT
    -- Wrap the column list in parentheses so the generated DDL runs as-is
    CONCAT('CREATE TABLE ', MP.table_name, CHAR(13), '(', MP.column_list, ');') AS TableScript
FROM
    MakeItPretty AS MP;
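For a hypothetical two-column DB2 table (table and column names here are invented for illustration), the generated TableScript comes out roughly along these lines, with exact spacing and null handling depending on which version of the script you run:

```sql
CREATE TABLE [SOME_TABLE]
([CUST_NAME] varchar(50) NULL,[CREATED_DT] date NULL);
```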

Related

SQL Server Linked Server openquery for WHERE clause

I am trying to write a query to get the delta between a table on my server and one on a linked server, so I wrote the following:
Select MD5 FROM ServerA A
WHERE NOT EXISTS
(
SELECT 1
FROM [LinkedServer].[Database].[dbo].[Files] fi
WHERE A.MD5High = fi.MD5High AND A.MD5Low = fi.MD5Low
)
However, this pulls all the data from the linked server to my server, which causes my server to run out of resources. So my next attempt was to perform the filtering on the linked server:
Select MD5 FROM ServerA A
WHERE NOT EXISTS
(
SELECT 1
FROM OPENQUERY( [LinkedServer] , 'SELECT [LinkedServer].[Database].[dbo].[Files] fi
WHERE fi.MD5High = ' + A.MD5Low + ' AND fi.MD5Low= ' + a.MD5Low + '')
)
However, the syntax in the second query is incorrect and I can't figure out the correct syntax.
I'd appreciate any direction.
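The second attempt cannot work as written, because OPENQUERY only accepts a literal string, not an expression built from local column values. One common workaround (my sketch, not from the original thread; table and column names follow the question) is to pull just the join keys from the linked server in a single remote call, stage them locally, and anti-join against the staging table:

```sql
-- Stage only the key columns from the remote table (one remote call).
-- Inside OPENQUERY the statement runs on the remote server, so a
-- four-part name is not needed there.
SELECT fi.MD5High, fi.MD5Low
INTO #RemoteKeys
FROM OPENQUERY([LinkedServer],
    'SELECT MD5High, MD5Low FROM [Database].[dbo].[Files]') AS fi;

-- Now the NOT EXISTS runs entirely locally against the staged keys.
SELECT A.MD5
FROM ServerA AS A
WHERE NOT EXISTS
(
    SELECT 1
    FROM #RemoteKeys AS rk
    WHERE A.MD5High = rk.MD5High
      AND A.MD5Low  = rk.MD5Low
);
```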

Katalon Studio SQL Query case statement

I'm working with Katalon Studio for web testing automation.
I'm using data from a database for the user login, and I have to handle values of ORMA_PROFESSIONALS.PROF_SURNAME_2 that are written with a white space. But when I use this query in Katalon Studio's SQL Query, I get the message below.
I use the same query in SQL Server Management Studio 2016 and don't receive any problems.
SELECT SECU_USERS.USER_LOGIN as login,
ORMA_PROFESSIONALS.PROF_NAME
+' '+ ORMA_PROFESSIONALS.PROF_SURNAME_1
+''+case ORMA_PROFESSIONALS.PROF_SURNAME_2
when '' then ''
else ' '+ORMA_PROFESSIONALS.PROF_SURNAME_2
end
as name
FROM SECU_USERS
inner join ORMA_PROFESSIONALS on
SECU_USERS.PROF_ID=ORMA_PROFESSIONALS.PROF_ID
Error:
com.microsoft.sqlserver.jdbc.SQLServerException: Unable to identify the table SELECT SECU_USERS.USER_LOGIN as login,
ORMA_PROFESSIONALS.PROF_NAME
+' '+ ORMA_PROFESSIONALS.PROF_SURNAME_1
+''+case ORMA_PROFESSIONALS.PROF_SURNAME_2
when '' then ''
else ' '+ORMA_PROFESSIONALS.PROF_SURNAME_2
end
as name
FROM SECU_USERS
inner join ORMA_PROFESSIONALS on
SECU_USERS.PROF_ID=ORMA_PROFESSIONALS.PROF_ID for the metadata.

How to generate Redshift-compatible table schema from a SQL Server table

I'm trying to create an automated pipeline to transfer data from a SQL Server table to my Redshift database. I need to do this for several tables, currently in SQL Server.
The process I'm doing for it is:
Automatically export data from the SQL Server table as a CSV (into a folder that is mapped to AWS S3 bucket) using a .bat script.
Write a Lambda function to watch for the file in the S3 bucket, load it into the Redshift table, and then remove the file from the bucket after completion.
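The load step can be sketched as a standard Redshift COPY statement (the bucket, table, and IAM role names below are invented placeholders, not from the question):

```sql
-- Load the day's CSV dump from S3 into the target Redshift table.
-- Bucket, table, and role names are illustrative only.
COPY staging.my_table
FROM 's3://my-etl-bucket/daily/my_table.csv'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
CSV
IGNOREHEADER 1
TIMEFORMAT 'auto';
```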
The above will be a daily dump loading only the new records since the last dump. Now, before getting this pipeline going, I want to know:
Is it possible to automatically create the table in my Redshift database from the SQL Server table, or is there something that will generate a Redshift-compatible CREATE TABLE definition from the SQL Server table? Since I need to do this for multiple tables, and the tables are really huge, I don't want to be manually writing "CREATE TABLE ..." for each of them in Redshift.
Please help!
This is exactly the use case for AWS Database Migration Service, which can do an initial migration and also perform ongoing incremental loads (though it requires a DMS server to be running).
See:
Using a Microsoft SQL Server Database as a Source for AWS DMS - AWS Database Migration Service
Using an Amazon Redshift Database as a Target for AWS Database Migration Service - AWS Database Migration Service
To create the equivalent schema in Amazon Redshift, you can use the AWS Schema Conversion Tool, which will convert an existing database schema from one database engine to another.
As John mentioned, AWS Database Migration Service is the best way to replicate a table from a source DB to a target DB.
If you are still just looking to get the equivalent Redshift CREATE TABLE DDL, you can use the metadata tables. I did the same for Oracle to Redshift and looped this over all the tables:
WITH COLUMN_DEFINITION AS (
SELECT
TABLE_NAME,
COLUMN_NAME,
CASE
WHEN (DATA_TYPE= 'NUMBER' AND DATA_SCALE = 0 AND DATA_PRECISION <= 9) THEN 'INTEGER'
WHEN (DATA_TYPE= 'NUMBER' AND DATA_SCALE = 0 AND DATA_PRECISION <= 18) THEN 'BIGINT'
WHEN (DATA_TYPE= 'NUMBER' AND DATA_SCALE = 0 AND DATA_PRECISION >= 19) THEN 'DECIMAL(' || DATA_PRECISION || ',0)'
WHEN (DATA_TYPE= 'NUMBER' AND DATA_SCALE > 0) THEN 'DECIMAL(' || DATA_PRECISION || ',' || DATA_SCALE ||')'
WHEN (DATA_TYPE= 'NUMBER' AND nvl(DATA_SCALE,0) = 0 AND nvl(DATA_PRECISION,0) = 0) THEN 'DECIMAL(38,18)'
WHEN DATA_TYPE= 'CHAR' THEN 'VARCHAR(' || DATA_LENGTH || ')'
WHEN DATA_TYPE= 'VARCHAR' THEN 'VARCHAR(' || DATA_LENGTH || ')'
WHEN DATA_TYPE= 'VARCHAR2' THEN 'VARCHAR(' || DATA_LENGTH || ')'
WHEN DATA_TYPE= 'DATE' THEN 'TIMESTAMP'
WHEN DATA_TYPE= 'DATETIME' THEN 'TIMESTAMP'
WHEN DATA_TYPE LIKE 'TIMESTAMP%' THEN 'TIMESTAMP'
WHEN DATA_TYPE= 'LONG' THEN 'TEXT'
WHEN DATA_TYPE= 'CLOB' THEN 'TEXT'
WHEN DATA_TYPE LIKE '%RAW%' THEN 'TEXT'
WHEN DATA_TYPE= 'NCHAR' THEN 'NCHAR(' || DATA_LENGTH || ')'
WHEN DATA_TYPE= 'NVARCHAR' THEN 'NVARCHAR(' || DATA_LENGTH || ')'
ELSE DATA_TYPE || '(' || DATA_LENGTH || ')'
END AS REDSHIFT_COLUMN_DEFINITION
FROM ALL_TAB_COLUMNS
WHERE LOWER(OWNER) = LOWER('<schema_name>')
  AND LOWER(TABLE_NAME) IN (LOWER('<table_name>'))
ORDER BY COLUMN_ID
)
SELECT 'drop table if exists ' || LOWER(MAX(TABLE_NAME)) || ' cascade; ' AS TEXT
FROM COLUMN_DEFINITION
UNION ALL
SELECT 'create table ' || LOWER(MAX(TABLE_NAME)) || ' (' AS TEXT
FROM COLUMN_DEFINITION
UNION ALL
SELECT '  ' || LOWER(COLUMN_NAME) || ' ' || REDSHIFT_COLUMN_DEFINITION || ', ' AS TEXT
FROM COLUMN_DEFINITION
UNION ALL
SELECT ' );' AS TEXT FROM DUAL
-- Note: the last column line ends with a trailing comma, which needs to be
-- removed (by hand or in the loop) before the generated DDL will run.

SQL Server trigger reports 'inserted' table is missing

I am running a script against a SQL Server 2016 database that creates various tables, views and triggers. This same script has been working against dozens of other servers but it is getting an error against this one particular server.
All of the triggers seem to be created but when I check for invalid objects it reports all of them as invalid. The really strange part is, it says the problem is the "inserted" table (or "deleted" table, depending on the trigger) is missing.
I am checking for invalid objects using this query:
SELECT
QuoteName(OBJECT_SCHEMA_NAME(referencing_id)) + '.'
+ QuoteName(OBJECT_NAME(referencing_id)) AS ProblemObject,
o.type_desc,
ISNULL(QuoteName(referenced_server_name) + '.', '')
+ ISNULL(QuoteName(referenced_database_name) + '.', '')
+ ISNULL(QuoteName(referenced_schema_name) + '.', '')
+ QuoteName(referenced_entity_name) AS MissingReferencedObject
FROM
sys.sql_expression_dependencies sed
LEFT JOIN
sys.objects o ON sed.referencing_id = o.object_id
WHERE
(is_ambiguous = 0)
AND (OBJECT_ID(ISNULL(QuoteName(referenced_server_name) + '.', '')
+ ISNULL(QuoteName(referenced_database_name) + '.', '')
+ ISNULL(QuoteName(referenced_schema_name) + '.', '')
+ QuoteName(referenced_entity_name)) IS NULL)
ORDER BY
ProblemObject,
MissingReferencedObject
which I got from here
Find broken objects in SQL Server
The triggers are "instead of" triggers against the views that then modify the underlying tables. There is really nothing special about any of them.
Is there something wrong with my query and the objects aren't really invalid or is there something with the database? I am running the script as the database owner so I don't think it is a permissions issue.
Thanks
Run this query against the problem SQL Server, and against one where no problems are shown:
SELECT *
FROM
sys.sql_expression_dependencies sed
LEFT JOIN
sys.objects o ON sed.referencing_id = o.object_id
WHERE o.type = 'TR'
ORDER BY
type_desc,
referenced_entity_name
If the inserted / deleted tables show up in one list but not the other, then there is some difference causing this. I'd check identifier quoting rules to start.
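One way to check the quoting-rules hypothesis (my addition, not part of the original answer) is to compare the SET options each trigger was created under on both servers, since mismatched ANSI settings are a common cause of objects being flagged invalid:

```sql
-- List the ANSI settings each trigger was created with;
-- run on both servers and compare the results.
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id)        AS trigger_name,
       m.uses_quoted_identifier,
       m.uses_ansi_nulls
FROM sys.sql_modules AS m
JOIN sys.objects     AS o ON o.object_id = m.object_id
WHERE o.type = 'TR'
ORDER BY trigger_name;
```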

Compare structures of two databases?

I wanted to ask whether it is possible to compare the complete database structure of two huge databases.
We have two databases: one is a development database, the other a production database.
I've sometimes forgotten to make changes in the production database before we released some parts of our code, with the result that the production database doesn't have the same structure, so the release produces errors.
Is there a way to compare the two, or synchronize them?
For MySQL databases you can compare views and tables (column names and column types) using this query:
SET @firstDatabaseName = '[first database name]';
SET @secondDatabaseName = '[second database name]';
SELECT * FROM
(SELECT
CONCAT(cl.TABLE_NAME, ' [', cl.COLUMN_NAME, ', ', cl.COLUMN_TYPE, ']') tableRowType
FROM information_schema.columns cl, information_schema.TABLES ss
WHERE
cl.TABLE_NAME = ss.TABLE_NAME AND
cl.TABLE_SCHEMA = @firstDatabaseName AND
ss.TABLE_TYPE IN('BASE TABLE', 'VIEW')
ORDER BY
cl.table_name ) AS t1
LEFT JOIN
(SELECT
CONCAT(cl.TABLE_NAME, ' [', cl.COLUMN_NAME, ', ', cl.COLUMN_TYPE, ']') tableRowType
FROM information_schema.columns cl, information_schema.TABLES ss
WHERE
cl.TABLE_NAME = ss.TABLE_NAME AND
cl.TABLE_SCHEMA = @secondDatabaseName AND
ss.TABLE_TYPE IN('BASE TABLE', 'VIEW')
ORDER BY
cl.table_name ) AS t2 ON t1.tableRowType = t2.tableRowType
WHERE
t2.tableRowType IS NULL
UNION
SELECT * FROM
(SELECT
CONCAT(cl.TABLE_NAME, ' [', cl.COLUMN_NAME, ', ', cl.COLUMN_TYPE, ']') tableRowType
FROM information_schema.columns cl, information_schema.TABLES ss
WHERE
cl.TABLE_NAME = ss.TABLE_NAME AND
cl.TABLE_SCHEMA = @firstDatabaseName AND
ss.TABLE_TYPE IN('BASE TABLE', 'VIEW')
ORDER BY
cl.table_name ) AS t1
RIGHT JOIN
(SELECT
CONCAT(cl.TABLE_NAME, ' [', cl.COLUMN_NAME, ', ', cl.COLUMN_TYPE, ']') tableRowType
FROM information_schema.columns cl, information_schema.TABLES ss
WHERE
cl.TABLE_NAME = ss.TABLE_NAME AND
cl.TABLE_SCHEMA = @secondDatabaseName AND
ss.TABLE_TYPE IN('BASE TABLE', 'VIEW')
ORDER BY
cl.table_name ) AS t2 ON t1.tableRowType = t2.tableRowType
WHERE
t1.tableRowType IS NULL;
If you prefer a tool with a UI, you can also use this script:
https://github.com/dlevsha/compalex
which can compare tables, views, keys, etc.
Compalex is a lightweight script to compare two database schemas. It
supports MySQL, MS SQL Server and PostgreSQL.
Screenshot (compare tables)
You can use the command line (-d is just shorthand for --no-data, so one of the two is enough):
mysqldump --skip-comments --skip-extended-insert --no-data -u root -p dbName1 > file1.sql
mysqldump --skip-comments --skip-extended-insert --no-data -u root -p dbName2 > file2.sql
diff file1.sql file2.sql
You can just dump them with --no-data and compare the files.
Remember to use the --lock-tables=0 option on your production database to avoid the big nasty global lock.
If you use the same mysqldump version (your dev and production systems should have the same software, right?) then you'll get more-or-less identical files out. The tables will be in alphabetical order, so a simple diff will surface discrepancies easily.
To answer this kind of question, I've made a script that uses information_schema to compare columns, data types, and tables:
SET @database_current = '<production>';
SET @database_dev = '<development>';
-- column and datatype comparison
SELECT a.TABLE_NAME, a.COLUMN_NAME, a.DATA_TYPE, a.CHARACTER_MAXIMUM_LENGTH,
b.COLUMN_NAME, b.DATA_TYPE, b.CHARACTER_MAXIMUM_LENGTH
FROM information_schema.COLUMNS a
LEFT JOIN information_schema.COLUMNS b ON b.COLUMN_NAME = a.COLUMN_NAME
AND b.TABLE_NAME = a.TABLE_NAME
AND b.TABLE_SCHEMA = @database_current
WHERE a.TABLE_SCHEMA = @database_dev
AND (
b.COLUMN_NAME IS NULL
OR b.COLUMN_NAME != a.COLUMN_NAME
OR b.DATA_TYPE != a.DATA_TYPE
OR b.CHARACTER_MAXIMUM_LENGTH != a.CHARACTER_MAXIMUM_LENGTH
);
-- table comparison
SELECT a.TABLE_SCHEMA, a.TABLE_NAME, b.TABLE_NAME
FROM information_schema.TABLES a
LEFT JOIN information_schema.TABLES b ON b.TABLE_NAME = a.TABLE_NAME
AND b.TABLE_SCHEMA = @database_current
WHERE a.TABLE_SCHEMA = @database_dev
AND (
b.TABLE_NAME IS NULL
OR b.TABLE_NAME != a.TABLE_NAME
);
I hope this script also helps people who are looking for a script-based rather than application-based solution. Cheers
I tried mysqldiff without success, so I'd like to point future readers to MySQL Workbench's compare function: http://dev.mysql.com/doc/workbench/en/wb-database-diff-report.html#c13030
If you open a model tab and select the Database menu, you get a Compare Schemas option, which you can use to compare two different schemas on two different servers, two schemas on the same server, a schema and a model, or a number of other combinations I haven't tried yet.
For MySQL on Linux, it is possible via phpMyAdmin to export the databases with structure only and no data.
In the export options for the entire database, just deselect 'data' and set the output to text, then export both databases you wish to compare.
Then compare the two text outputs in your preferred diff program or site. Synchronization is still manual with this solution, but it is effective for comparing and finding structural differences.
Depending on your database, the tools available vary.
I use Embarcadero's ER/Studio for this. It has a Compare and Merge feature.
There are plenty of others, such as Toad for MySQL, that also have compare features. I also agree with the Red Gate suggestion, but I've never used their tools for MySQL.
Check out Gemini Delta - SQL Difference Manager for .NET. A free beta version is available for download, but the full version is only a few days away from public release.
It doesn't compare row-level data differences, but it compares tables, functions, sprocs, etc., and it is lightning fast. (The new version, 1.4, loads and compares 1,000 sprocs in under 4 seconds; other tools I've tested took over 10 seconds.)
Everyone is right though, RedGate does make great tools.
