Migrate Oracle partitioned tables to SQL Server - sql-server

I need to migrate about 700 Oracle partitioned tables (RANGE and LIST partitioning) to SQL Server.
Turns out the SSMA (SQL Server Migration Assistant) does not handle Oracle partitioned tables (this is the official answer I got from Microsoft).
Any tool / script / other suggestion to automate this process?
Thanks!

They are correct:
I tried to do this for a work project last year and found the same thing. I did a little research on Google to see whether anything has changed, but found the following:
"Migration of Oracle partitioned tables is not supported by SSMA. Partitioned tables are migrated as non-partitioned simple tables.
Partitioning of these tables in SQL Server has to be done manually, as per the physical database architecture planning and the logical drives of the server system.
Any partition maintenance code (adding, dropping, or truncating partitions) needs to be rewritten in SQL Server."
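Since the re-partitioning has to be done by hand, here is a minimal sketch of what an Oracle RANGE partition maps to on the SQL Server side (the function, scheme, and table names and the boundary values are all illustrative assumptions, not from the question):

-- Minimal sketch: a RANGE-partitioned table in SQL Server.
CREATE PARTITION FUNCTION pf_OrderDate (date)
AS RANGE RIGHT FOR VALUES ('2023-01-01', '2024-01-01');

CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

CREATE TABLE dbo.Orders (
    OrderId   int  NOT NULL,
    OrderDate date NOT NULL
) ON ps_OrderDate (OrderDate);

-- Partition maintenance is the part that must be rewritten by hand, e.g.:
-- ALTER PARTITION FUNCTION pf_OrderDate() SPLIT RANGE ('2025-01-01');

Note that SQL Server has no native LIST partitioning, so Oracle LIST partitions are usually emulated with a RANGE function whose boundary values are the discrete list values.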

Related

Storing Unused Databases and Tables for SQL 2008

We are looking to upgrade SQL Server from 2008 to 2017. The instance has multiple databases from a few years back, and we have no idea whether those databases, or the tables inside them, have been used lately; if not, we can retire the databases or tables that are no longer needed.
We would like to store the results for unused databases and tables in a table (e.g. UnUsedDBAndTables), run this through a SQL Agent job daily or every 3 days, and keep the results updated.
How can we implement this, so we can check and analyze this table (UnUsedDBAndTables) over a period of time and determine which ones really do not need to be migrated?
Thanks for your help!
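A starting point is the sys.dm_db_index_usage_stats DMV. A minimal sketch, assuming the UnUsedDBAndTables results table from the question with a (DatabaseName, SchemaName, TableName, CheckedAt) layout, which is my assumption; the DMV is reset on every service restart, which is exactly why the agent job needs to accumulate results over time, and it must be run in each database you want to check:

-- Tables in the current database with no reads recorded since the last restart.
INSERT INTO dbo.UnUsedDBAndTables (DatabaseName, SchemaName, TableName, CheckedAt)
SELECT DB_NAME(), s.name, t.name, GETDATE()
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
WHERE NOT EXISTS (
    SELECT 1
    FROM sys.dm_db_index_usage_stats AS u
    WHERE u.database_id = DB_ID()
      AND u.object_id = t.object_id
      AND (u.user_seeks + u.user_scans + u.user_lookups) > 0
);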

SQL Server 2012 - Synchronizing tables on different servers

Hi I need your assistance.
I have a table that I must synchronize in near real time.
There are, however, a couple of challenges that I'm sitting with:
The table (Table A) to be synchronized does not reside on one server, but on two different servers: Server A (Production) and Server C (Development).
The table (Table A) on Server A and Server C does not have a primary key, so merge and transactional replication cannot work.
Setting up triggers on Server A, the production server, is out of the question; this is an operation-critical server.
Adding a primary key is also out of the question.
There are +/- 90 records being added to the table per minute.
Please can anyone assist?
Many thanks
You have ruled out so many options that none of the HA technologies can work. The only solution I can think of is:
1. Write a custom solution in C# which compares the tables and does the necessary merge based on some column like a date (see the sketch below).
2. Write the last-written record into some table so the exe can start from there again.
3. Trigger this through SQL Server Agent.
You could also use AlwaysOn, which provides readable secondaries as well, but I am not sure you would want to be failing over between production and development.
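The comparison-and-watermark logic in steps 1 and 2 could equally live in T-SQL run by the Agent job. A minimal sketch, assuming a CreatedAt datetime column on Table A, a linked server named ServerA pointing at production, and a one-row watermark table; all of these names are assumptions:

-- Pull only the rows added since the last run, then advance the watermark.
DECLARE @LastSynced datetime2;
SELECT @LastSynced = LastCreatedAt FROM dbo.SyncWatermark;

INSERT INTO dbo.TableA (Col1, Col2, CreatedAt)
SELECT Col1, Col2, CreatedAt
FROM ServerA.ProdDb.dbo.TableA      -- linked server to production (read-only)
WHERE CreatedAt > @LastSynced;

UPDATE dbo.SyncWatermark
SET LastCreatedAt = (SELECT MAX(CreatedAt) FROM dbo.TableA);

At +/- 90 rows per minute this stays cheap, and nothing is installed on the production server beyond the linked-server read.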

Compare millions of records from Oracle to SQL server

I have an Oracle database and a SQL Server database. There is one table, say Inventory, which contains millions of rows in both databases, and it keeps growing.
I want to compare the Oracle table data with the SQL Server data, on a daily basis, to find out which records are missing in the SQL Server table.
Which is the best approach for this?
Create an SSIS package.
Create a Windows service.
I want a solution that consumes the least resources and takes the least time.
E.g.: 18 million records in Oracle and 16-17 million in SQL Server.
This situation with two different databases arises because there are two different applications, one online and one offline.
EDIT: How about connecting to SQL Server from Oracle through Oracle Database Gateway for SQL Server, in order to:
1) Query SQL Server directly from Oracle to fill in the missing records in SQL Server the first time.
2) Create a trigger on Oracle which is executed when a record is deleted from Oracle and inserts the deleted record into a new Oracle table (a sketch of this trigger follows below).
3) Create an SSIS package to map the newly created Oracle table to SQL Server and update the SQL Server records. This way only a few records have to be processed daily through SSIS.
What do you think of this approach?
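For step 2, a minimal sketch of such a delete-logging trigger (the Inventory column names here are illustrative assumptions):

-- Log table capturing deleted Inventory rows, plus the trigger that fills it.
CREATE TABLE inventory_deleted (
  item_id    NUMBER,
  quantity   NUMBER,
  deleted_at DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER trg_inventory_delete
AFTER DELETE ON inventory
FOR EACH ROW
BEGIN
  -- :OLD holds the column values of the row being deleted.
  INSERT INTO inventory_deleted (item_id, quantity)
  VALUES (:OLD.item_id, :OLD.quantity);
END;
/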
I would create an SSIS package and load the data from the Oracle table using a Data Flow with an OLE DB Source. If you have SQL Enterprise, the Attunity connectors are a bit faster.
Then I would load the keys from the SQL Server table into a Lookup transformation, match the two sources on the key, and direct unmatched rows into a separate output.
Finally I would direct the unmatched-rows output to an OLE DB Command to update the SQL Server table.
This SSIS package will require a lot of memory, but as the matching is done in memory with minimal IO, it will probably outperform other solutions for speed. It will need enough free memory to cache all the keys from the SQL Server Table.
SSIS also has the advantage that it has lots of other transformation functions available if you need them later.
What you basically want to do is replication from Oracle to SQL Server.
You could do this in SSIS, a Windows service, or indeed on a multitude of platforms.
The real trick is using the correct design pattern.
There are two general design patterns
Snapshot Replication
You take all records from both systems and compare them somewhere (so far we have suggestions to compare in SSIS or on Oracle, but not yet a suggestion to compare on SQL Server, although that is also valid).
You are comparing 18 million records here, so this is a lot of work.
Differential replication
You record the changes in the publisher (i.e. Oracle) since the last replication then you apply those changes to the subscriber (i.e. SQL Server)
You can do this manually by implementing triggers and log tables on the Oracle side, then using a regular ETL process (SSIS, command-line tools, text files, whatever), probably scheduled in SQL Agent, to apply these changes to SQL Server.
Or you could use the out-of-the-box replication capability to set up Oracle as a publisher and SQL Server as a subscriber: https://msdn.microsoft.com/en-us/library/ms151149(v=sql.105).aspx
You're going to have to try a few of these and see what works for you.
Given this objective:
I want a solution that consumes the least resources and takes the least time
transactional replication is far more efficient, but complicated. For maintenance purposes, which platforms (.NET, SSIS, Python etc.) are you most comfortable with?
Other alternatives:
If you can use Oracle gateway for SQL Server, then you do not need to transfer the data and can run the query directly.
If you can't use Oracle gateway, you can use Pentaho Data Integration or another ETL tool to compare the tables and get the results. It is easy to use.
I think the best approach is using Oracle gateway. Just follow these steps; I have had a similar kind of experience.
Install and Configure Oracle Database Gateway for SQL Server.
https://docs.oracle.com/cd/B28359_01/gateways.111/b31042/installsql.htm
Now you can create a database link from Oracle to SQL Server.
Create a procedure which finds the records missing from the SQL Server database and inserts them.
For example, you can use this statement inside your procedure.
INSERT INTO "dbo"."sql_server_table"@dblink_name ("column1","column2"...."column5")
SELECT column1, column2....column5 FROM oracle_table
MINUS
SELECT "column1","column2"...."column5" FROM "dbo"."sql_server_table"@dblink_name;
Create a scheduler which executes the procedure daily.
When both databases are online, the missing records will be inserted into SQL Server. Otherwise the scheduled job fails, and you can execute the procedure manually.
It takes minimal resources.
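A minimal sketch of the scheduler step using DBMS_SCHEDULER (the procedure name SYNC_MISSING_TO_SQLSERVER is a placeholder):

-- Run the comparison/insert procedure every day at 02:00.
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'SYNC_MISSING_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SYNC_MISSING_TO_SQLSERVER',
    start_date      => SYSTIMESTAMP,
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => TRUE);
END;
/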
I would suggest a homemade ETL solution:
Schedule an Oracle job to export the source table data (daily, or as the application logic dictates) to plain CSV format.
Schedule a SQL Server job (with an acceptable delay after the first Oracle job) to read this CSV file and import it into a staging table inside SQL Server using BULK INSERT.
The last part of the SQL Server job reads the staging table data and applies the logic (insert into / update the target table). I suggest having another table to store the results of this daily job.
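A minimal sketch of the SQL Server side of this (the file path, table, and column names are assumptions):

-- Load the exported CSV into the staging table...
BULK INSERT dbo.Inventory_Staging
FROM 'C:\exports\inventory.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n');

-- ...then insert only the rows missing from the target.
INSERT INTO dbo.Inventory (ItemId, Quantity)
SELECT s.ItemId, s.Quantity
FROM dbo.Inventory_Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Inventory AS t WHERE t.ItemId = s.ItemId);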

Database design and relation diagram details

We have a production SQL Server, and my desktop has SQL Server 2008 R2 Management Studio installed. I have recently been given a task to perform data mining on our server's databases.
We have around 100 or more tables there, and it is getting very difficult for me to see how the tables are related or how they were created.
For a particular scenario I have narrowed it down to 3 tables among the hundreds we have, but I cannot work out how these tables are related to each other. Only if I know that one table's column is the PK/FK of another can I execute something like the following to extract data:
SELECT *
FROM tablea
JOIN tableb ON tableb.id = tablea.id
and do data mining on the result data set.
Please let me know how I can get all the tables and their relationship details. What tool can I use so that information like the above can be extracted, or the database design can be understood?
I tried to create a database diagram, but it showed me the error below.
Do I need to install any other tool?
My SSMS version is the 2008 R2 release mentioned above.
I think your solution is to use a database diagram (https://msdn.microsoft.com/en-us/library/ms189078.aspx).
Just drag all the tables onto the screen and it will show you the relations; this of course only works when the primary keys/foreign keys are there.
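If you would rather query than use the diagram designer, the relationships can also be read from the catalog views. A minimal sketch:

-- List every foreign-key relationship between tables.
SELECT fk.name AS ForeignKeyName,
       OBJECT_NAME(fkc.parent_object_id) AS ChildTable,
       COL_NAME(fkc.parent_object_id, fkc.parent_column_id) AS ChildColumn,
       OBJECT_NAME(fkc.referenced_object_id) AS ParentTable,
       COL_NAME(fkc.referenced_object_id, fkc.referenced_column_id) AS ParentColumn
FROM sys.foreign_keys AS fk
JOIN sys.foreign_key_columns AS fkc ON fkc.constraint_object_id = fk.object_id
ORDER BY ChildTable, ForeignKeyName;

Like the diagram, this only helps where foreign keys were actually declared.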
For the error you are getting:
if I google that for you I get:
The backend version is not supported to design database diagrams or tables
The answer marked as the solution is:
This is commonly reported as an error due to using the wrong version of SSMS. Use the version designed for your database version. You can use SELECT @@VERSION to check which version of SQL Server you are actually using.

Is there an elegant way to track the modification of all columns of one table in SQL Server 2008

There is a table in my database containing 100 columns. I want to create a trigger to audit the modifications made by every update operation.
All I can think of is to write an update check for each of the columns, but they are all very similar scripts. So is there any elegant way to do that?
Check out Change Data Capture.
Update
CDC provides tracking of all details of changes. Available since SQL Server 2008.
(Change data capture is available only on the Enterprise, Developer, and Evaluation editions of SQL Server.
Source: http://msdn.microsoft.com/en-us/library/bb522489.aspx)
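A minimal sketch of enabling CDC for one table (dbo.MyTable is a placeholder name):

-- Enable CDC at the database level, then for the table to be audited.
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'MyTable',
     @role_name     = NULL;   -- NULL: no gating role required to read changes

SQL Server then generates a cdc.fn_cdc_get_all_changes_dbo_MyTable function for reading the captured changes.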
A more lightweight solution is Change Tracking (the one code4life mentioned before, used by the Sync Framework); SQL Server's built-in Change Tracking is available since SQL Server 2008.
Update2:
Related questions (with a lot of sublinks):
History tables pros, cons and gotchas - using triggers, sproc or at application level
Suggestions for implementing audit tables in SQL Server?
Are soft deletes a good idea?
How do I version my MS SQL database in SVN? (Versioning SQL Server database)
Thomas LaRock. SQL Server Audit: Magic without a Wizard
http://www.simple-talk.com/sql/database-administration/sql-server-audit-magic-without-a-wizard/
There's this resource on MSDN which you might find helpful:
Tracking Changes in the Server Database (including SQL Server 2008)
I'm not sure if you're using SQL Server 2008 though.
Code generation?
Have you looked at the techniques which http://autoaudit.codeplex.com/ uses?
Theoretically, you can use a single trigger and check COLUMNS_UPDATED() to know which columns have changed.
(Not tested.)
See more here
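A minimal sketch of that single-trigger idea (the audit table and its columns are assumptions): COLUMNS_UPDATED() returns a varbinary bitmask with one bit per column ordinal, so one trigger can record which of the 100 columns were touched without writing a check per column.

-- One trigger audits every UPDATE; the bitmask says which columns changed.
CREATE TRIGGER trg_MyTable_Audit
ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO dbo.MyTableAudit (KeyId, UpdatedColumnsMask, AuditedAt)
    SELECT i.KeyId, COLUMNS_UPDATED(), SYSDATETIME()
    FROM inserted AS i;   -- one audit row per updated row
END;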
