Is there any option in SQL Server 2016 to change column order without creating a temp table and recreating/reinserting the whole table? The link below is for 2005.
We have a 500 million row table in our data warehouse and want to insert a column in the middle. We can either recreate the table, or use 300+ views for all of our tables that are in a similar situation; the views then become another metadata presentation layer we have to manage. We wish SQL Server could change column order as easily as Aurora or PostgreSQL.
How to change column order in a table using sql query in sql server 2005?
Is there any option in SQL Server 2016 to change column order, without creating a temp table and recreating/reinserting the whole table?
No. Column ordinals in SQL Server control the visible order of the columns and the physical layout of the data.
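If only the displayed order matters, the view approach mentioned in the question is the usual workaround: physically append the new column and present the desired order through a view. A minimal sketch, with hypothetical table and column names:

-- The new column is physically appended at the end of the table.
ALTER TABLE dbo.FactSales ADD NewCol INT NULL;
GO

-- The view presents NewCol "in the middle" without rebuilding the table.
CREATE VIEW dbo.vFactSales
AS
SELECT SaleID, SaleDate, NewCol, Amount
FROM dbo.FactSales;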
I have to migrate a SQL Server 2008 database to SQL Server 2012. The 2008 instance is Enterprise edition and the 2012 instance is Standard edition. As we know, Standard edition does not support table partitioning.
The table that is partitioned in the Enterprise edition has 1 clustered and around 8 non-clustered indexes. I need to remove this partitioning but do not know how. Can someone please shed a little light on how I should go about it?
Thanks.
To unpartition a table, you'll need to recreate all the indexes with a filegroup specification instead of a partition scheme. I suggest you drop all the non-clustered indexes and then rebuild the existing partitioned clustered index using CREATE INDEX...WITH (DROP_EXISTING = ON) with a filegroup specification. Then recreate the non-clustered indexes with a filegroup specified.
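A minimal sketch of that sequence, with hypothetical table, index, and filegroup names (if the clustered index backs a primary key constraint, keep the key definition identical):

-- 1. Drop the non-clustered indexes first.
DROP INDEX IX_Sales_Customer ON dbo.Sales;

-- 2. Rebuild the partitioned clustered index onto a filegroup,
--    which moves the data off the partition scheme.
CREATE UNIQUE CLUSTERED INDEX PK_Sales
ON dbo.Sales (SalesID)
WITH (DROP_EXISTING = ON)
ON [PRIMARY];

-- 3. Recreate the non-clustered indexes, again on a filegroup.
CREATE NONCLUSTERED INDEX IX_Sales_Customer
ON dbo.Sales (CustomerID)
ON [PRIMARY];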
I need to migrate about 700 Oracle partitioned tables (RANGE and LIST partitioning) to SQL Server.
It turns out that SSMA (SQL Server Migration Assistant) does not handle Oracle partitioned tables (this is the official answer I got from Microsoft).
Any tool / script / other suggestion to automate this process?
Thanks!
They are correct:
I tried to do this for a work project last year and found the same thing. I did a little research on Google to see if anything has changed, and found the following:
"Migration of Oracle partitioned tables is not supported by SSMA. Partitioned tables are migrated as non-partitioned simple tables.
Partitioning of these tables in SQL Server has to be done manually, as per the physical database architecture planning and the logical drives of the server system.
Any partition maintenance code (adding, dropping, or truncating partitions) needs to be rewritten in SQL Server."
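For the manual part, recreating an Oracle RANGE partitioning scheme in SQL Server looks roughly like this (a sketch; names, types, and boundary values are hypothetical):

-- Define the boundaries (the equivalent of the Oracle RANGE partitions).
CREATE PARTITION FUNCTION pf_OrderDate (date)
AS RANGE RIGHT FOR VALUES ('2016-01-01', '2017-01-01');

-- Map every partition to a filegroup.
CREATE PARTITION SCHEME ps_OrderDate
AS PARTITION pf_OrderDate ALL TO ([PRIMARY]);

-- Create the migrated table on the partition scheme.
CREATE TABLE dbo.Orders
(
    OrderID   int  NOT NULL,
    OrderDate date NOT NULL
) ON ps_OrderDate (OrderDate);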
I have an Oracle database and a SQL Server database. There is one table, say Inventory, which contains millions of rows in both databases, and it keeps growing.
I want to compare the Oracle table data with the SQL Server data to find out which records are missing in the SQL Server table, on a daily basis.
Which is the best approach for this?
Create an SSIS package.
Create a Windows service.
I want to achieve this functionality using as few resources and as little time as possible.
E.g.: 18 million records in Oracle and 16/17 million in SQL Server.
This situation with two different databases arises because there are two different applications, one online and one offline.
EDIT: How about connecting to SQL Server from Oracle through Oracle Database Gateway for SQL Server, to:
1) Query SQL Server directly from Oracle to insert the missing records into SQL Server the first time.
2) Create a trigger on Oracle which fires when a record is deleted from Oracle and inserts the deleted record into a new Oracle table.
3) Create an SSIS package to map the newly created Oracle table to SQL Server to update the SQL Server records. This way only a few records have to be processed daily through SSIS.
What do you think of this approach?
I would create an SSIS package and load the data from the Oracle table using a Data Flow with an OLE DB Source. If you have SQL Server Enterprise, the Attunity connectors are a bit faster.
Then I would load the keys from the SQL Server table into a Lookup transformation, match the two sources on the key, and direct unmatched rows into a separate output.
Finally, I would direct the unmatched rows output to an OLE DB Command to update the SQL Server table.
This SSIS package will require a lot of memory, but as the matching is done in memory with minimal IO, it will probably outperform other solutions for speed. It will need enough free memory to cache all the keys from the SQL Server table.
SSIS also has the advantage that it has lots of other transformation functions available if you need them later.
What you basically want to do is replication from Oracle to SQL Server.
You could do this in SSIS, a Windows service, or indeed on a multitude of platforms.
The real trick is using the correct design pattern.
There are two general design patterns:
Snapshot Replication
You take all records from both systems and compare them somewhere (so far we have suggestions to compare in SSIS or on Oracle, but not yet a suggestion to compare on SQL Server, although this is also valid).
You are comparing 18 million records here, so this is a lot of work.
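For completeness, comparing on the SQL Server side could look like this over a linked server (linked server, table, and column names are hypothetical):

-- Pull the Oracle keys through a linked server and anti-join locally.
SELECT o.ID
FROM OPENQUERY(ORACLE_LINK, 'SELECT ID FROM INVENTORY') AS o
WHERE NOT EXISTS
(
    SELECT 1 FROM dbo.Inventory i WHERE i.ID = o.ID
);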
Differential replication
You record the changes in the publisher (i.e. Oracle) since the last replication, then apply those changes to the subscriber (i.e. SQL Server).
You can do this manually by implementing triggers and log tables on the Oracle side, then use a regular ETL process (SSIS, command line tools, text files, whatever), probably scheduled in SQL Agent to apply these to the SQL Server.
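A minimal sketch of the trigger-and-log-table part on the Oracle side (table, column, and trigger names are hypothetical):

-- Log table recording which row changed and how.
CREATE TABLE inventory_changes
(
    id          NUMBER,
    change_type VARCHAR2(1),          -- 'I', 'U' or 'D'
    changed_at  DATE DEFAULT SYSDATE
);

CREATE OR REPLACE TRIGGER trg_inventory_log
AFTER INSERT OR UPDATE OR DELETE ON inventory
FOR EACH ROW
BEGIN
    IF INSERTING THEN
        INSERT INTO inventory_changes (id, change_type) VALUES (:NEW.id, 'I');
    ELSIF UPDATING THEN
        INSERT INTO inventory_changes (id, change_type) VALUES (:NEW.id, 'U');
    ELSE
        INSERT INTO inventory_changes (id, change_type) VALUES (:OLD.id, 'D');
    END IF;
END;
/

The scheduled ETL process then reads inventory_changes since the last run and applies the same operations to SQL Server.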
Or you could do this by using the out of the box replication capability to set up Oracle as a publisher and SQL as a subscriber: https://msdn.microsoft.com/en-us/library/ms151149(v=sql.105).aspx
You're going to have to try a few of these and see what works for you.
Given this objective:
I want to achieve this functionality using as few resources and as little time as possible
transactional replication is far more efficient but complicated. For maintenance purposes, which platforms (.Net, SSIS, Python etc.) are you most comfortable with?
Other alternatives:
If you can use Oracle Database Gateway for SQL Server, then you do not need to transfer data and can run the query directly.
If you can't use the Oracle gateway, you can use Pentaho Data Integration or another ETL tool to compare the tables and get the results. It is easy to use.
I think the best approach is using the Oracle gateway. Just follow the steps below; I have had a similar experience.
Install and Configure Oracle Database Gateway for SQL Server.
https://docs.oracle.com/cd/B28359_01/gateways.111/b31042/installsql.htm
Now you can create a database link from Oracle to SQL Server.
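For example (link name, login, and TNS alias are hypothetical):

-- Oracle side: database link pointing at the gateway's TNS entry.
CREATE DATABASE LINK dblink_name
CONNECT TO sql_user IDENTIFIED BY sql_password
USING 'gateway_tns_alias';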
Create a procedure which finds the records missing from the SQL Server database and inserts them into it.
For example, you can use this statement inside your procedure.
-- Rows in oracle_table that are not yet in the SQL Server table.
INSERT INTO "dbo"."sql_server_table"@dblink_name ("column1","column2",...,"column5")
SELECT column1, column2, ..., column5 FROM oracle_table
MINUS
SELECT "column1","column2",...,"column5" FROM "dbo"."sql_server_table"@dblink_name;
Create a scheduler job which executes the procedure daily.
When both databases are online, the missing records will be inserted into SQL Server. Otherwise the scheduler job fails and you can execute the procedure manually.
It takes minimal resources.
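For the scheduling step, a DBMS_SCHEDULER sketch (job and procedure names are hypothetical):

BEGIN
    DBMS_SCHEDULER.CREATE_JOB(
        job_name        => 'SYNC_MISSING_ROWS_JOB',
        job_type        => 'STORED_PROCEDURE',
        job_action      => 'SYNC_MISSING_ROWS',  -- the comparison procedure above
        repeat_interval => 'FREQ=DAILY;BYHOUR=2',
        enabled         => TRUE);
END;
/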
I suggest a homemade ETL solution.
Schedule an Oracle job to export the source table data (on a daily basis, following the application logic) to a plain CSV file.
Schedule a SQL Server job (with an acceptable delay after the first Oracle job) to read this CSV file and import it into a staging table inside SQL Server using BULK INSERT.
The last part of the SQL Server job reads the staging table and applies the logic (insert/update the target table). I suggest having another table to store the results of this daily job.
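A sketch of the SQL Server side of this, assuming hypothetical table names and file path:

-- Load the daily CSV into the staging table.
BULK INSERT dbo.Inventory_Staging
FROM 'D:\etl\inventory_daily.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Insert rows that are missing from the target table.
INSERT INTO dbo.Inventory (ID, Quantity)
SELECT s.ID, s.Quantity
FROM dbo.Inventory_Staging s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Inventory t WHERE t.ID = s.ID);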
I have a table (myTable) with a column flagged as FILESTREAM; on this server it is the only filestream, and it saves to the filestream location F:\foo.
SELECT COUNT(1) FROM myTable returns 37,314, but the folder properties of F:\foo show 36,358 files. All the rows in myTable have data in the FILESTREAM column, so does that mean 956 were exact duplicates?
If so, how does SQL Server determine what is and what is not a duplicate? Is it a complete binary compare? (I don't think it would be worth SQL Server storing data at a block-differential level.) I can't seem to find any information on SQL Server consolidating duplicate records for filestreams.
Additionally, when I re-save many of the same records again (bringing the count to, say, 45,000), the total number of files in F:\foo increases, which to me indicates that the duplicate checking (if there is any such thing) is not perfect.
Does SQL Server consolidate identical files in filestreams or not? Is there a stored procedure that can be executed to make SQL Server re-scan the filestream filegroup and consolidate further duplicates to recover space?
The server in question is SQL Server 2012 Enterprise with SP1, but this has also happened on our UAT SQL Server 2012 Standard Edition with SP1 box.