Storing Unused Databases and Tables for SQL Server 2008 - sql-server

We are looking to upgrade our SQL Server from 2008 to 2017. The instance has multiple databases going back a few years, and we have no idea whether those databases are really still in use, or whether the tables in them have been used lately; if not, we can retire the databases or tables that are no longer used.
We would like to capture the list of unused databases and tables, store the results in a table (e.g. UnUsedDBAndTables), run the collection through a SQL Agent job daily or every 3 days, and update that table on each run.
How can we implement this so that we can check and analyze this table (UnUsedDBAndTables) over a period of time and determine which databases and tables really do not need to be migrated?
Thanks for your help!
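One possible starting point, sketched below, is to read sys.dm_db_index_usage_stats on a schedule and append the results to the UnUsedDBAndTables table. The column layout, the central DBA database, and the job wiring are only illustrative assumptions; note also that this DMV is reset when the instance restarts, which is why collecting repeatedly over time matters.

-- Collection table in a central admin database (DBA is just an example name).
CREATE TABLE DBA.dbo.UnUsedDBAndTables
(
    CollectedAt    datetime NOT NULL DEFAULT GETDATE(),
    DatabaseName   sysname  NOT NULL,
    SchemaName     sysname  NOT NULL,
    TableName      sysname  NOT NULL,
    LastUserRead   datetime NULL,
    LastUserUpdate datetime NULL
);

-- Run this in each user database (e.g. via sp_MSforeachdb) from a SQL Agent job every 1-3 days.
INSERT INTO DBA.dbo.UnUsedDBAndTables (DatabaseName, SchemaName, TableName, LastUserRead, LastUserUpdate)
SELECT  DB_NAME(),
        s.name,
        t.name,
        MAX(COALESCE(u.last_user_seek, u.last_user_scan, u.last_user_lookup)),
        MAX(u.last_user_update)
FROM    sys.tables t
JOIN    sys.schemas s ON s.schema_id = t.schema_id
LEFT JOIN sys.dm_db_index_usage_stats u
        ON u.database_id = DB_ID() AND u.object_id = t.object_id
GROUP BY s.name, t.name;

Tables that still show NULL read dates after several weeks of collection are candidates for not migrating.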

Related

SQL Server 2012 - Synchronizing tables on different servers

Hi, I need your assistance.
I have a table that I must synchronize in near real time.
There is however a couple of challenges that I'm sitting with:
The table (Table A) to be synchronized does not reside on one server; there is a copy on each of two different servers, Server A (Production) and Server C (Development).
The table (Table A) on Server A and Server C does not have a primary key, so merge and transactional replication cannot work.
Setting up triggers on Server A - the production server - is out of the question; this is an operation-critical server.
Adding a primary key is also out of the question.
There are +/- 90 records being added to the table per minute.
Please can anyone assist?
Many thanks
You have ruled out so many options that no HA technology can work. The only solution I can think of is:
1. Write a custom solution in C# which compares the tables and does the necessary merge based on some column like a date (a rough sketch follows below).
2. Write the last written records into some table so the exe can start from there again on the next run.
3. Trigger this through SQL Server Agent.
You can also use AlwaysOn, which provides readable secondaries as well, but I am not sure you would want to be failing over between production and development.
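The same logic could live in a C# exe, but as a rough T-SQL sketch through a linked server it looks something like this. The linked server, bookmark table, and the ever-increasing RecordDate column are all assumptions; without a primary key, a strictly increasing column like that is the only safe way to avoid copying the same rows twice.

DECLARE @LastSync datetime, @NewMark datetime;

-- Bookmark written by the previous run (SyncBookmark is an assumed helper table).
SELECT @LastSync = LastSyncDate FROM dbo.SyncBookmark;

-- High-water mark on the production copy, read through a linked server.
SELECT @NewMark = MAX(RecordDate)
FROM   [ServerA].[ProdDB].[dbo].[TableA];

-- Copy only the rows added since the last run.
INSERT INTO dbo.TableA (Col1, Col2, RecordDate)
SELECT Col1, Col2, RecordDate
FROM   [ServerA].[ProdDB].[dbo].[TableA]
WHERE  RecordDate > @LastSync
  AND  RecordDate <= @NewMark;

-- Remember where this run ended so the next run starts there.
UPDATE dbo.SyncBookmark SET LastSyncDate = @NewMark;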

Compare millions of records from Oracle to SQL server

I have an Oracle database and a SQL Server database. There is one table, say Inventory, which contains millions of rows in both databases, and it keeps growing.
I want to compare the Oracle table data with the SQL Server data to find out which records are missing in the SQL Server table, on a daily basis.
Which is the best approach for this?
Create an SSIS package.
Create a Windows service.
I want the solution to consume as few resources and as little time as possible.
E.g. 18 million records in Oracle and 16/17 million in SQL Server.
This situation with two different databases arises because there are two different applications, one online and one offline.
EDIT: How about connecting to SQL Server from Oracle through Oracle Database Gateway for SQL Server and then:
1) Running a direct query from Oracle against SQL Server to insert the missing records into SQL Server the first time.
2) Creating a trigger on Oracle which is executed when a record is deleted from Oracle and inserts the deleted record into a new Oracle table (a rough sketch of such a trigger follows below).
3) Creating an SSIS package to map the newly created Oracle table to SQL Server and update the SQL Server records. This way only a few records have to be processed daily through SSIS.
What do you think of this approach?
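A minimal PL/SQL sketch of the delete-logging trigger in point 2; the column names are purely illustrative, since the real Inventory columns are not given in the question:

-- Empty copy of the source table to hold deleted rows.
CREATE TABLE inventory_deleted AS SELECT * FROM inventory WHERE 1 = 0;

CREATE OR REPLACE TRIGGER trg_inventory_delete
AFTER DELETE ON inventory
FOR EACH ROW
BEGIN
  -- item_id, item_name and quantity are placeholder column names.
  INSERT INTO inventory_deleted (item_id, item_name, quantity)
  VALUES (:OLD.item_id, :OLD.item_name, :OLD.quantity);
END;
/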
I would create an SSIS package and load the data from the Oracle table using a Data Flow with an OLE DB Source. If you have SQL Server Enterprise, the Attunity connectors are a bit faster.
Then I would load the keys from the SQL Server table into a Lookup transformation, match the two sources on the key, and direct unmatched rows into a separate output.
Finally I would direct the unmatched rows output to an OLE DB Command to update the SQL Server table.
This SSIS package will require a lot of memory, but as the matching is done in memory with minimal IO, it will probably outperform other solutions for speed. It will need enough free memory to cache all the keys from the SQL Server Table.
SSIS also has the advantage that it has lots of other transformation functions available if you need them later.
What you basically want to do is replication from Oracle to SQL Server.
You could do this in SSIS, a Windows service, or indeed a multitude of platforms.
The real trick is using the correct design pattern.
There are two general design patterns
Snapshot Replication
You take all records from both systems and compare them somewhere (so far we have suggestions to compare in SSIS or to compare on Oracle, but not yet a suggestion to compare on SQL Server, although this is also valid).
You are comparing 18 million records here, so this is a lot of work.
Differential replication
You record the changes in the publisher (i.e. Oracle) since the last replication, then you apply those changes to the subscriber (i.e. SQL Server).
You can do this manually by implementing triggers and log tables on the Oracle side, then using a regular ETL process (SSIS, command-line tools, text files, whatever), probably scheduled in SQL Agent, to apply these changes to SQL Server.
Or you could do this by using the out of the box replication capability to set up Oracle as a publisher and SQL as a subscriber: https://msdn.microsoft.com/en-us/library/ms151149(v=sql.105).aspx
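For the manual variant, the apply step on the SQL Server side can be as simple as a MERGE from a staging table that the ETL process fills from the Oracle log table. Every name and the ChangeType convention below are illustrative assumptions, not part of the question:

MERGE dbo.Inventory AS target
USING dbo.Inventory_Changes AS source          -- staged rows pulled from the Oracle log table
      ON target.ItemId = source.ItemId
WHEN MATCHED AND source.ChangeType = 'D' THEN
    DELETE
WHEN MATCHED AND source.ChangeType = 'U' THEN
    UPDATE SET target.Quantity = source.Quantity
WHEN NOT MATCHED BY TARGET AND source.ChangeType IN ('I', 'U') THEN
    INSERT (ItemId, Quantity) VALUES (source.ItemId, source.Quantity);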
You're going to have to try a few of these and see what works for you.
Given this objective:
I want the solution to consume as few resources and as little time as possible
transactional replication is far more efficient, but it is more complicated. For maintenance purposes, which platforms (.NET, SSIS, Python, etc.) are you most comfortable with?
Other alternatives:
If you can use the Oracle gateway for SQL Server, then you do not need to transfer data and can run the query directly.
If you can't use the Oracle gateway, you can use Pentaho Data Integration or another ETL tool to compare the tables and get the results. It is easy to use.
I think the best approach is using the Oracle gateway. Just follow these steps; I have experience with a similar setup.
Install and Configure Oracle Database Gateway for SQL Server.
https://docs.oracle.com/cd/B28359_01/gateways.111/b31042/installsql.htm
Now you can create a database link from Oracle to SQL Server.
Create a procedure which finds the records missing from the SQL Server database and inserts them from the Oracle database.
For example, you can use this statement inside your procedure.
INSERT INTO "dbo"."sql_server_table"@dblink_name ("column1","column2",...,"column5")
SELECT column1, column2, ..., column5 FROM oracle_table
MINUS
SELECT "column1","column2",...,"column5" FROM "dbo"."sql_server_table"@dblink_name;
Create a scheduler job which executes the procedure daily.
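For that scheduling step, something along these lines on the Oracle side; the job and procedure names are placeholders:

BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'SYNC_MISSING_TO_SQLSERVER',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SYNC_MISSING_RECORDS',     -- the procedure created above
    repeat_interval => 'FREQ=DAILY;BYHOUR=1',
    enabled         => TRUE);
END;
/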
When both databases are online, the missing records will be inserted into SQL Server. Otherwise the scheduler job fails, or you can execute the procedure manually.
It takes minimal resources.
I would suggest a homemade ETL solution.
Schedule an Oracle job to export the source table data (on a daily basis, depending on the application logic) to plain CSV format.
Schedule a SQL Server job (with an acceptable delay after the first Oracle job) to read this CSV file and import it into a staging table inside SQL Server using BULK INSERT.
The last part of the SQL Server job reads the staging table data and applies the logic (insert/update the target table). I suggest having another table to store the results of this daily job.
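A hedged sketch of the SQL Server side of that job; the file path, table and column names are examples only:

-- Load the CSV produced by the Oracle export job into a staging table.
BULK INSERT dbo.Inventory_Staging
FROM 'D:\exports\inventory_daily.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

-- Apply the logic: insert rows that are missing from the target table.
INSERT INTO dbo.Inventory (ItemId, Quantity)
SELECT s.ItemId, s.Quantity
FROM   dbo.Inventory_Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Inventory AS t WHERE t.ItemId = s.ItemId);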

Compare database on Oracle and SQL Server

I am working on a project that migrates databases from Oracle 10g to SQL Server 2008 using SSMA (SQL Server Migration Assistant). I want to know if there is a way to actually compare the data in tables that reside in a tablespace, say 'A', on Oracle with the corresponding migrated database 'A' on SQL Server.
I am not bothered about the data types of the various columns right now; if there is a way to map them, that would be great. I am just concerned with the data differences, if any, that exist.
Let me know if you are aware of any such free tool which does this, or if any of you have written a tool which could help me do the same.
Thanks !!
You will have to map the PK from the source to the destination and, if the columns are the same, fetch rows in bulk and compare them...
Lots of hard work.
Maybe it would be better if you could count rows and verify a statistical sample of records.
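A simple starting point along those lines is to compare row counts and a coarse aggregate per table on both sides; the table and column names below are placeholders:

-- On Oracle:
SELECT COUNT(*) AS row_cnt, SUM(amount) AS amount_sum FROM some_table;

-- On SQL Server (against the migrated copy):
SELECT COUNT(*) AS row_cnt, SUM(amount) AS amount_sum FROM dbo.some_table;

-- If the totals differ, group by a column such as a date to narrow down where the rows diverge:
SELECT created_date, COUNT(*) AS row_cnt
FROM   some_table
GROUP BY created_date
ORDER BY created_date;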

SQL Server 2005 Auditing

Background
I have a production SQL Server 2005 server to which 4 different applications connect and make changes.
There are no foreign keys and in some cases no primary keys.
Unfortunately throwing the whole thing out and starting from scratch is not an option.
So my solution is to start migrating each of the applications to a service layer approach so that there is only one application directly connecting to the database.
However there are problems that need to be fixed before that service layer is written and all the applications are migrated over.
So rather than make changes and hope they don't break any one of the 4 badly written applications (with no way of quickly testing all functionality), my solution is to start auditing the database.
Problem
How do I audit which stored procedures, tables, columns, and views are being accessed/updated/called by each user on SQL Server 2005?
I can find out which tables are being updated but I have no idea which columns and by what users.
I also don't know if certain tables are being accessed only through stored procedures/views.
I know that SQL Server 2008 has better auditing features but if I could do this without spending money that would be great. That said if the best solution is to upgrade or buy software that's also an option.
Check out SQL Server 2008's CDC feature. You can't use this directly in 2005, but you can write a trigger for each table to log all data changes to a new audit table, i.e. you'd have an audit table for each table in your DB, with all the same columns plus some additional columns saying what the operation was and when it occurred.
If the nature of your applications means you can get user and/or application information from CURRENT_USER and APP_NAME(), you could include that information in the audit table too.
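A hedged sketch of such a trigger for one table; all table and column names here are illustrative:

CREATE TABLE dbo.Customer_Audit
(
    AuditId    int IDENTITY(1,1) PRIMARY KEY,
    Operation  char(1)       NOT NULL,                      -- 'I', 'U' or 'D'
    AuditedAt  datetime      NOT NULL DEFAULT GETDATE(),
    AuditUser  sysname       NOT NULL DEFAULT CURRENT_USER,
    AppName    nvarchar(128) NOT NULL DEFAULT APP_NAME(),
    CustomerId int           NULL,
    Name       nvarchar(100) NULL
);
GO
CREATE TRIGGER trg_Customer_Audit ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- Rows in "inserted" cover INSERTs and UPDATEs; rows only in "deleted" are DELETEs.
    INSERT INTO dbo.Customer_Audit (Operation, CustomerId, Name)
    SELECT CASE WHEN EXISTS (SELECT 1 FROM deleted) THEN 'U' ELSE 'I' END,
           i.CustomerId, i.Name
    FROM inserted AS i;

    INSERT INTO dbo.Customer_Audit (Operation, CustomerId, Name)
    SELECT 'D', d.CustomerId, d.Name
    FROM deleted AS d
    WHERE NOT EXISTS (SELECT 1 FROM inserted);
END;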
And check out this answer for more goodness.

backup data for reporting

What is the best method to transfer data from a sales table to a sales history table in SQL Server 2005? The sales history table will be used for reporting.
Take a look at SSAS. OLAP is built for reporting and is easy to query with tools like Excel pivot tables.
Bulk copy is fast and will not use the transaction log. Run one batch at the end of the day.
Deleting the copied records from your production server is a different matter that needs to be planned within that server's maintenance approach/plans. Your reporting-server solution should not interfere with or affect the production server.
Keep in mind that your reporting server is not meant to be a backup of the data but rather a copy made exclusively for reporting purposes.
Also check the settings of your reporting server to make sure it is on the Simple recovery model.
Most solutions will require 2 steps (a sketch follows below):
-copy the records from source to target
-delete the records from the source.
It is essential that your source table have a primary key.
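A minimal sketch of those two steps, assuming a SaleId primary key and a one-month date cut-off; all names are placeholders:

BEGIN TRANSACTION;

-- Step 1: copy records older than the cut-off into the history table.
INSERT INTO dbo.SalesHistory (SaleId, SaleDate, Amount)
SELECT s.SaleId, s.SaleDate, s.Amount
FROM   dbo.Sales AS s
WHERE  s.SaleDate < DATEADD(month, -1, GETDATE())
  AND  NOT EXISTS (SELECT 1 FROM dbo.SalesHistory AS h WHERE h.SaleId = s.SaleId);

-- Step 2: delete from the source any record that is now in the history table.
DELETE s
FROM   dbo.Sales AS s
WHERE  EXISTS (SELECT 1 FROM dbo.SalesHistory AS h WHERE h.SaleId = s.SaleId);

COMMIT TRANSACTION;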
The "best" method depends on a lot of things.
How many records?
Is this a production environment?
What tools do you have?
Unless you are moving a large amount of data, a simple stored procedure should do the trick.
A SQL Server job can manage the timing of when to call the proc.
If you just want to move the data to another table, use BulkCopy/BulkInsert. If you want to build reporting, I would suggest a BI solution such as MS Analysis Services (OLAP).
It is difficult, and in my opinion ugly, to maintain two or more history/archive tables in the same database. For a reporting solution you will be considering all the tables for that piece of information anyway. History/archive tables should only be used if you are going to put the data away and not touch it for a long period of time, i.e. archive it away outside the operational DB.
