We have N tables on an Oracle server and we want to load all of them from Oracle to SQL Server. We are building a dynamic SSIS package for this, which will take the Oracle server name, database name, schema name, table list, etc. and load all of those tables into SQL Server. We have already added a linked server for Oracle on the SQL Server instance (via SSMS).
But we have not found an efficient way to do this. How can we achieve it in a single SSIS package? How can we handle the metadata of the Oracle tables and create the same tables on SQL Server? The package should also create the tables dynamically on SQL Server; for this we tried using a temp table in the SSIS package.
Since you have to do it for a large number of tables, I'd write a PL/SQL procedure built around something like this:
declare
  v_sql varchar2(1024);
begin
  for x in (select owner, table_name from dba_tables where .....)
  loop
    -- note: no trailing semicolon inside the string passed to execute immediate
    v_sql := 'create table ' ||
             x.table_name ||
             '#mssql as select * from ' ||
             x.owner || '.' || x.table_name;
    execute immediate v_sql;
  end loop;
end;
/
Or, if you want to look it over before launching, use SQL to write SQL. In SQL*Plus:
set echo off feedback off verify off trimsp on pages 0
spool doit.sql
select 'create table '||
table_name ||
'#mssql as select * from '||
owner || '.' || table_name || ';'
from dba_tables
where .....
;
spool off
Then check the spooled SQL file for any issues before running it.
All code above is off the top of my head. There may be minor syntax issues.
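If you would rather drive the copy from the SQL Server side through the linked server mentioned in the question, the same generate-and-review idea applies. This is only a sketch: the linked server name ORA_LNK and the schema filter MYSCHEMA are placeholders, not taken from the question.
-- Generate one SELECT ... INTO statement per Oracle table; review the output before running it.
-- SELECT ... INTO creates the SQL Server table from the column metadata OPENQUERY returns,
-- so the Oracle table structures don't have to be scripted by hand.
SELECT 'SELECT * INTO [' + OWNER + '_' + TABLE_NAME + '] ' +
       'FROM OPENQUERY(ORA_LNK, ''SELECT * FROM "' + OWNER + '"."' + TABLE_NAME + '"'');'
FROM OPENQUERY(ORA_LNK, 'SELECT OWNER, TABLE_NAME FROM ALL_TABLES WHERE OWNER = ''MYSCHEMA''');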
I am using UPDATE OPENQUERY for a DB2 iSeries linked server on a SQL Server 2012 instance.
Some DB2 tables are faster using the IBMDA400 driver and some DB2 tables are faster using the IBMDASQL driver.
Has anyone ever encountered this problem before?
All code is similar to the following:
UPDATE OPENQUERY(DB2, 'SELECT col1, col2 FROM schema.table WHERE A_TYPE = ''N'' ')
SET A_TYPE = 'Y'
Thank you
Try using a pure pass-through query in all cases, e.g.
exec( 'update schema.table set A_TYPE = ''Y'' where A_TYPE = ''N'' ') at DB2
I am loading data through ODI into Snowflake. The work tables created with a c$ prefix need to be dropped after a successful load. How can I drop those temp tables? I'd appreciate your suggestions.
If you still need this, I wrote a stored procedure that takes a dynamically generated list of SQL statements and executes them one at a time. You can use it to run any list of SQL statements produced by a SELECT query, including dropping all tables matching a pattern such as c$%. First, here's the stored procedure:
create or replace procedure RunBatchSQL(sqlCommand String)
returns string
language JavaScript
as
$$
/**
* Stored procedure to execute multiple SQL statements generated from a SQL query
* Note that this procedure will always use the column named "SQL_COMMAND"
*
* @param {String} sqlCommand: The SQL query to run to generate one or more SQL commands
* @return {String}: A string containing all the SQL commands executed, each separated by a newline.
*/
var cmd1_dict = {sqlText: SQLCOMMAND};
var stmt = snowflake.createStatement(cmd1_dict);
var rs = stmt.execute();
var s = '';
while (rs.next()) {
var cmd2_dict = {sqlText: rs.getColumnValue("SQL_COMMAND")};
var stmtEx = snowflake.createStatement(cmd2_dict);
stmtEx.execute();
s += rs.getColumnValue(1) + "\n";
}
return s;
$$
You can use this stored procedure to run any dynamically generated SQL statements in batch using the following script. Run the topmost query by itself first, and it will be obvious what running the stored procedure with that query text as the parameter will do:
-- This is a select query that will generate a list of SQL commands to execute in batch.
-- This SQL will generate rows to drop all tables starting with c$. With minor edits
-- you could limit it to a specific database or schema.
select 'drop table ' || TABLE_CATALOG || '.' || TABLE_SCHEMA || '.' || "TABLE_NAME" as SQL_COMMAND
FROM INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME like 'c$%';
-- As a convenience, this grabs the last SQL run so that it's easier to insert into
-- the parameter used to call the stored procedure.
set query_text = ( select QUERY_TEXT
from table(information_schema.query_history(result_limit => 2))
where SESSION_ID = Current_Session() and QUERY_TYPE = 'SELECT' order by START_TIME desc);
-- Confirm that the query_text variable has the correct SQL query to generate our SQL commands (drop table statements in this case) to run.
select $query_text;
-- Run the stored procedure. Note that to view its output better, double-click on the output to see it in multi-line format.
Call RunBatchSQL($query_text);
--Check the last several queries run to make sure it worked.
select QUERY_TEXT
from table(information_schema.query_history(result_limit => 100))
where SESSION_ID = Current_Session() order by START_TIME desc;
The C$-prefixed work tables are a product of ODI, but they are not created as actual Snowflake temporary tables, so they do not benefit from automatic deletion when the JDBC session terminates.
The ODI publishers note this about their C$ and I$ work tables:
When a scenario successfully completes, it will automatically delete these tables, as they're transitory and are no longer required. However, where a scenario does not complete successfully, it is possible these tables get left behind, and from time to time it may be desirable to clean up these tables to reclaim space.
Unsuccessful scenarios in your use of ODI are likely what is leaving tables behind on Snowflake. Following the link above should help you run a procedure that deletes the leftover work tables (manually or on a schedule). The relevant steps are copied here for convenience:
To run the procedure:
Open ODI Studio and connect to the BI Apps ODI Repository.
Go to the Designer tab and use the navigator to browse to: BI Apps Project -> Components -> DW -> Oracle -> Clean Work and Flow Tables folder.
In that folder, find the Clean Work and Flow Tables package; inside this package is the UTILITIES_CLEAN_WORK_AND_FLOW_TABLES scenario.
Right-click the scenario and select the 'Execute' option. At the prompt, provide the desired number of days to go back before deleting tables.
We have two databases, one in SQL Server and one in DB2. We have a scenario where we do inserts, updates, and deletes in SQL Server, and at the same time we also do inserts, updates, and deletes in DB2.
We sync data back and forth using some processes: whenever there is a change in SQL Server we sync it to DB2 (inserts, updates, and deletes), and whenever there is a change in DB2 we sync it to SQL Server. We use IBM MQ messages, which we dequeue to sync the changes back and forth.
Everything was fine until we had an issue syncing data from DB2 to SQL Server: one of the processes that syncs from DB2 to SQL Server was down. There is an on-demand job that runs every night and does a full data refresh from DB2 to SQL Server, but it only does a merge (update and insert); it does not delete, because data that has not yet been synced to DB2 is also present in SQL Server, so we cannot delete directly since either database can have more or fewer records. As a result, some rows on SQL Server are left orphaned. We have scoping in place, so data that is updated in SQL Server cannot be changed in DB2 and vice versa.
My question is: when syncing from DB2 to SQL Server, how can we identify records that were deleted only from DB2, so that we can delete them from SQL Server? We do not want to delete records that were created in SQL Server but have not yet been sent to DB2. We have 114 tables, so maintaining a flag to differentiate them is not an option.
When you say you are synchronizing data back and forth between MS SQL Server and DB2, how are you capturing the changes? If you are using a CDC tool (IDR, GoldenGate, Informatica), these tools let you detect conflicts so you can decide which records to keep or delete.
If you are capturing your changes with an in-house development (triggers or your own log scraper), you should keep at least the operation type and a timestamp in your change data set, so that you can recognize the operation.
If you are just comparing the tables and dealing with the differences, you won't be able to tell whether rows missing on the DB2 side were deleted on the DB2 side or added on the SQL Server side. But you can fix that by building a proper change data capture mechanism, along the lines of the sketch below.
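For illustration only, here is a minimal trigger-based sketch of what "operation type and timestamp" could look like on the SQL Server side. The table and column names (dbo.Customer, CustomerId, dbo.Customer_ChangeLog) are made up for the example, not taken from the question.
-- Hypothetical change-log table: one row per changed key, with operation type and timestamp.
CREATE TABLE dbo.Customer_ChangeLog (
    CustomerId int          NOT NULL,
    Operation  char(1)      NOT NULL,  -- 'I' = insert, 'U' = update, 'D' = delete
    ChangedAt  datetime2(3) NOT NULL DEFAULT SYSUTCDATETIME()
);
GO
-- Hypothetical trigger that records the operation type for every change.
CREATE TRIGGER trg_Customer_Log ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- A key present in both INSERTED and DELETED is an update; only in INSERTED is an insert; only in DELETED is a delete.
    INSERT INTO dbo.Customer_ChangeLog (CustomerId, Operation)
    SELECT COALESCE(i.CustomerId, d.CustomerId),
           CASE WHEN i.CustomerId IS NOT NULL AND d.CustomerId IS NOT NULL THEN 'U'
                WHEN i.CustomerId IS NOT NULL THEN 'I'
                ELSE 'D' END
    FROM INSERTED AS i
    FULL OUTER JOIN DELETED AS d ON d.CustomerId = i.CustomerId;
END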
Change tracking on the SQL Server side might be a viable option (as long as all the tables you would like to sync and delete from have a primary key).
With change tracking you could track which rows, for each table, were created on the SQL Server side since the last sync from SQL Server to DB2. Those rows should not be deleted yet:
-- Sketch: assumes a primary key column PK and a stored last-sync version @last_sync_version.
DELETE T
FROM SQL_SERVER_TABLE AS T
WHERE NOT EXISTS (SELECT *  -- not inserted on the SQL Server side since the last sync
                  FROM CHANGETABLE(CHANGES SQL_SERVER_TABLE, @last_sync_version) AS CT
                  WHERE CT.PK = T.PK AND CT.SYS_CHANGE_OPERATION = 'I')
AND NOT EXISTS (SELECT *    -- and no longer present in the DB2 snapshot
                FROM DB2_staging AS S
                WHERE S.PK = T.PK);
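For completeness, change tracking has to be enabled at both the database and the table level before CHANGETABLE returns anything. A sketch, with the database and table names as placeholders:
-- Enable change tracking on the database (retention values here are just an example).
ALTER DATABASE YourSqlServerDb
SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 7 DAYS, AUTO_CLEANUP = ON);
-- Enable change tracking on each table taking part in the sync.
ALTER TABLE dbo.SQL_SERVER_TABLE
ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = OFF);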
I would connect SQL Server to DB2 via a linked server (more info here: https://learn.microsoft.com/fr-fr/sql/relational-databases/system-stored-procedures/sp-addlinkedserver-transact-sql?view=sql-server-ver15) and then run queries to find out which records are missing on each side.
This can be accomplished with OPENQUERY. You can do something like this:
SELECT * FROM YourSqlTable
EXCEPT
SELECT * FROM OPENQUERY(YOURDB2SERVER, 'SELECT * FROM YourDB2Table')
And then the same thing inverted:
SELECT * FROM OPENQUERY(YOURDB2SERVER, 'SELECT * FROM YourDB2Table')
EXCEPT
SELECT * FROM YourSqlTable
You can then send the missing records to the right server.
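For example, to copy the rows that exist in DB2 but not in SQL Server over to the SQL Server side (a sketch reusing the placeholder names above, and assuming both tables have identical column lists):
INSERT INTO YourSqlTable
SELECT * FROM OPENQUERY(YOURDB2SERVER, 'SELECT * FROM YourDB2Table')
EXCEPT
SELECT * FROM YourSqlTable;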
If you have a lot of tables to compare, you can write these queries with dynamic SQL:
DECLARE @TABLENAME nvarchar(200);
DECLARE @Query nvarchar(MAX);
DECLARE TABLE_CUR CURSOR FOR
SELECT TABLE_NAME FROM YourDatabaseName.INFORMATION_SCHEMA.TABLES;
OPEN TABLE_CUR;
FETCH NEXT FROM TABLE_CUR INTO @TABLENAME;
WHILE @@FETCH_STATUS = 0
BEGIN
    SET @Query = 'SELECT * FROM OPENQUERY(YOURDB2SERVER, ''SELECT *
    FROM ' + @TABLENAME + ' '')
    EXCEPT
    SELECT * FROM ' + @TABLENAME;
    -- Don't forget the doubled '' for OPENQUERY
    EXEC sp_executesql @Query;
    SET @Query = 'SELECT * FROM ' + @TABLENAME + '
    EXCEPT
    SELECT * FROM OPENQUERY(YOURDB2SERVER, ''SELECT *
    FROM ' + @TABLENAME + ' '')';
    -- Don't forget the doubled '' for OPENQUERY
    EXEC sp_executesql @Query;
    -- Advance the cursor (this was missing and would cause an infinite loop).
    FETCH NEXT FROM TABLE_CUR INTO @TABLENAME;
END
CLOSE TABLE_CUR;
DEALLOCATE TABLE_CUR;
Thanks for the suggestions. I am not using CDC, but I maintain the changes that are yet to be synced to DB2 in a LOG table.
DELETE TGT
FROM [IGP].[LocationType] AS TGT
INNER JOIN #locationType SRC ON
TGT.[LocationTypeCode] = SRC.[LocationTypeCode];
I first insert the log-table data that is yet to be synced to DB2 into the #locationType temp table and delete those rows from IGP (the staging table holding the DB2 master data), so that the pending updates and deletes won't be overridden by the IGP staging data.
Now I need to take care of inserts that don't exist in DB2 but are present in SQL Server only because they haven't been synced from the log table yet. I shouldn't delete those, as that would be data loss, so I use the MERGE query below:
MERGE INTO [dbo].[LocationType] AS TGT
USING [IGP].[LocationType] AS SRC
ON TGT.[LocationTypeCode] = SRC.[LocationTypeCode]
WHEN MATCHED AND (EXISTS
(SELECT TGT.[Description] EXCEPT SELECT SRC.[Description]))
THEN
UPDATE SET TGT.[LocationTypeCode] = SRC.[LocationTypeCode],
TGT.[Description] = SRC.[Description]
WHEN NOT MATCHED THEN
INSERT([LocationTypeCode], [Description])
VALUES([LocationTypeCode], [Description])
WHEN NOT MATCHED BY SOURCE
AND (EXISTS (SELECT TGT.[LocationTypeCode]
EXCEPT SELECT [LocationTypeCode] FROM #locationType)) THEN DELETE;
I have a database server where some databases are used by restricted users. I need to prevent those users from changing the .MDF and .LDF autogrowth settings. Please guide me on how to restrict this.
I think there are two ways to achieve this:
Disable autogrowth on the databases
Limit the maximum size of the MDF and LDF files
But I couldn't find any option in Management Studio to do this server-wide, nor a way to take this ability away from users.
Thanks.
You can execute the following ALTER DATABASE command, which sets the autogrowth option to OFF, for all databases using the undocumented stored procedure sp_MSforeachdb.
For a single database (Parallel Data Warehouse instances only):
ALTER DATABASE [database_name] SET AUTOGROW = OFF
For all databases:
EXEC sp_Msforeachdb "ALTER DATABASE [?] SET AUTOGROW = OFF"
Although this is not a server variable or instance-level setting, it might ease the task of updating all databases on the SQL Server instance.
Excluding the system databases, the following T-SQL lists all database files and prepares the output commands, which can then be executed:
select
'ALTER DATABASE [' + db_name(database_id) + '] MODIFY FILE ( NAME = N''' + name + ''', FILEGROWTH = 0)'
from sys.master_files
where database_id > 4
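If you prefer the question's second option (capping the file size rather than disabling growth entirely), the same MODIFY FILE pattern works with MAXSIZE instead of FILEGROWTH. A sketch, with the size limit chosen arbitrarily:
select
'ALTER DATABASE [' + db_name(database_id) + '] MODIFY FILE ( NAME = N''' + name + ''', MAXSIZE = 10240MB)'
from sys.master_files
where database_id > 4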
To prevent the data files' autogrow property from being changed, I prepared the SQL Server DDL trigger below; I had previously used a similar DDL trigger for logging DROP TABLE statements.
The trigger will also prevent you from changing this property yourself, so if you need to update it, you have to drop the trigger first.
CREATE TRIGGER prevent_filegrowth
ON ALL SERVER
FOR ALTER_DATABASE
AS
declare @SqlCommand nvarchar(max)
set @SqlCommand = ( SELECT EVENTDATA().value('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','nvarchar(max)') );
if( isnull(charindex('FILEGROWTH', @SqlCommand), 0) > 0 )
begin
RAISERROR ('FILEGROWTH property cannot be altered', 16, 1)
ROLLBACK
end
GO
For more on DDL Triggers, please refer to Microsoft Docs
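When you do need to change FILEGROWTH yourself, drop the server-level trigger first and recreate it afterwards:
DROP TRIGGER prevent_filegrowth ON ALL SERVER;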
I added a trigger to the table to copy the inserted data to an audit table.
I got all the column names of the table from INFORMATION_SCHEMA.
I used "SELECT * INTO #INSERTED FROM INSERTED" to copy inserted data to a temporary table.
Then I used the following dynamic query to get the data from the temporary table for each column:
SET @sqlText = N'SELECT ' + @ColName + ' FROM #INSERTED'
where @ColName is the column name.
It was working fine with SQL Server 2008.
Now we have moved to SQL Azure. SELECT INTO is not supported in SQL Azure. I cannot create a temporary table and then use INSERT on it, as my table contains over 70 columns, and I also cannot use the INSERTED table in a dynamic query.
So, please suggest a solution or workaround for this.
SQL Azure V11 doesn't support select into. Please upgrade your server to SQL DB v12 and you should be able to do this.
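Once the server is on v12, the pattern described in the question works again. A minimal sketch for reference; the table, column, and trigger names here are placeholders, not taken from the original post:
CREATE TRIGGER trg_MyTable_Audit ON dbo.MyTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy the inserted rows into a temp table so dynamic SQL can see them.
    SELECT * INTO #INSERTED FROM INSERTED;
    -- Read a single (placeholder) column from the temp table via dynamic SQL.
    DECLARE @ColName sysname = N'SomeColumn';
    DECLARE @sqlText nvarchar(max) = N'SELECT ' + QUOTENAME(@ColName) + N' FROM #INSERTED';
    EXEC sp_executesql @sqlText;
END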