I want to enter data from multiple T-SQL queries into my Azure SQL database. We want a single table with 8 columns in the Azure SQL database, and for those 8 columns we have multiple T-SQL statements, one for each, that will enter the data from the SELECT statements into the Azure SQL database. How can this be achieved? Long term, we want this to run as a scheduled job going forward.
If your multiple T-SQL queries run in one database, I suggest you look at Azure Data Factory.
Azure Data Factory can help migrate data from one table or multiple tables to an Azure SQL database using T-SQL queries.
You can also trigger pipeline runs on a schedule: create a schedule trigger to run the pipeline periodically (hourly, daily, and so on).
For details about Data Factory, please see the Azure Data Factory documentation.
Tutorials:
Incrementally load data from multiple tables in SQL Server to an Azure SQL database
Copy multiple tables in bulk by using Azure Data Factory.
And if your source data is in a SQL Server instance, you can create a linked server to Azure SQL Database; this can also help you achieve that.
You can query and insert data into the linked Azure SQL server with T-SQL statements.
About SQL Server linked servers, please see: Create Linked Servers (SQL Server Database Engine)
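For illustration, a minimal sketch of such an insert through a linked server (the server, database, and table names here are placeholders, not your actual objects); each of your SELECT statements would get its own statement like this:
-- Push the result of a local SELECT into the Azure SQL table via a four-part name
INSERT INTO [AzureLinkedServer].[AzureDbName].[dbo].[TargetTable] (Col1, Col2)
SELECT SourceCol1, SourceCol2
FROM dbo.LocalSourceTable;
A statement like this can then be put in a SQL Server Agent job step to cover the long-term scheduling requirement.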
Hope this helps.
We have an on-prem SQL Server DB (SQL Server 2017, compatibility level 140) that is about 1.2 TB. We need to do a repeatable migration of just the data to a cloud SQL (PaaS) database. The on-prem database has procedures and functions that do cross-DB queries, which eliminates the Data Migration Assistant. Many of the tables that we need to migrate are system-versioned tables (just to make this more fun). Ideally we would like to move the data into a different schema of a different DB so we can avoid the use of external tables (we're worried about performance).
Moving the data is just the first step as we also need to do an ETL job on the data to massage it into the new table structure.
We are looking at using ADF, but it has trouble with versioned tables unless we turn them off first.
What other options can we look at and try, to be able to do this quickly and repeatedly? Do we need to change to IaaS or use a third-party tool? Did we miss options in ADF to handle this?
If I summarize your requirements, you are not just migrating a database to the cloud but a complete architecture of your SQL Server, which includes:
1.2 TB of data,
Continuous data migration afterwards,
Procedures and functions for cross DB queries,
Versioned tables
Points 1, 3, and 4 can be done easily by creating and exporting a .bacpac file using SQL Server Management Studio (SSMS) from on premises to Azure Blob storage and then importing that file into Azure SQL Database. The .bacpac file that we create in SSMS allows us to include all versioned tables, which we can import into the destination database.
Follow this third-party tutorial by sqlshack to migrate data to Azure SQL Database.
The stored procedures can also be moved using SQL scripts. Follow the steps below:
1. Go to the server in Management Studio.
2. Select the database and right-click on it, then go to Tasks.
3. Select the Generate Scripts option under Tasks.
4. Once it has started, select the desired stored procedures you want to copy, create a script file of them, and then run the script from that file against the Azure SQL DB, which you can log in to from SSMS.
The repeatable migration of data is the challenging part. You can try it with Change Data Capture (CDC), but I'm not sure whether that is exactly what your requirement is. You can enable CDC at the database level using the command below:
-- Enable Change Data Capture at the database level
USE <databasename>;
EXEC sys.sp_cdc_enable_db;
To learn more, refer to https://www.qlik.com/us/change-data-capture/cdc-change-data-capture
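Note that CDC must also be enabled on each table you want to track; a minimal sketch, assuming a placeholder table dbo.YourTable:
-- Enable CDC on a single table
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name = N'YourTable',
    @role_name = NULL; -- NULL means no gating role is required to read the change data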
I have to copy table data from one Azure SQL database to another Azure SQL database under the same Azure server.
Is there any way to do this using Azure Data Factory? Also, this needs to be scheduled as a daily feed.
Edit: How can we add more tables to the existing dataset? I have created this for 3 tables; now I want to add two more tables to it. How?
Did you have a look at Copy data to and from SQL Server by using Azure Data Factory?
In Azure Data Factory, you can use the Copy activity to copy data among data stores located on-premises and in the cloud. After you copy the data, you can use other activities to further transform and analyze it.
You can have a look at the steps here on how to configure a triggered pipeline.
One important thing to remember is that you'll have to define the dataset (with or without schema) for every table that requires copying, for any source-destination combination.
You can think of elastic queries (preview) for cross-database queries and elastic jobs (preview) for job scheduling.
Utilize Elastic Query to bring results from another database on the same server. Read more on Elastic Query. The advantage is that it comes free with Azure SQL.
Elastic database query (preview) for Azure SQL Database allows you to run T-SQL queries that span multiple databases using a single connection point.
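As a rough sketch of the setup (the server, database, credential, and table names below are all placeholder assumptions):
-- In the database that will run the cross-database queries
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE DATABASE SCOPED CREDENTIAL RemoteCred
    WITH IDENTITY = '<remote user>', SECRET = '<remote password>';
CREATE EXTERNAL DATA SOURCE RemoteDb WITH (
    TYPE = RDBMS,
    LOCATION = 'yourserver.database.windows.net',
    DATABASE_NAME = 'OtherDb',
    CREDENTIAL = RemoteCred
);
-- Mirror the remote table's schema as an external table, then query it like a local one
CREATE EXTERNAL TABLE dbo.RemoteTable (
    Id INT NOT NULL,
    Name NVARCHAR(100)
) WITH (DATA_SOURCE = RemoteDb);
SELECT * FROM dbo.RemoteTable;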
Schedule an Elastic Job (currently in preview), which can be used to schedule jobs in an Azure SQL database. Read more on Elastic Jobs.
Elastic Database Jobs (preview) are job scheduling services that execute custom jobs on one or many Azure SQL Databases.
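A hedged sketch of creating such a job with the Elastic Jobs stored procedures, run in the job database (the job, credential, target group, and procedure names are placeholder assumptions):
-- Create the job, then attach a step that runs T-SQL against a target group
EXEC jobs.sp_add_job
    @job_name = N'DailyCopy',
    @description = N'Example daily copy job';
EXEC jobs.sp_add_jobstep
    @job_name = N'DailyCopy',
    @command = N'EXEC dbo.MyCopyProc;', -- the T-SQL to run on each target database
    @credential_name = N'MyJobCred',
    @target_group_name = N'MyTargetGroup';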
I'm new to SQL Server and trying to automatically update tables in SQL Server from tables in MS Access.
I have an Access database of metadata that must be kept updated for sending records to other groups. I also have a database in SQL Server which also has these same metadata tables. Currently these tables in the SQL Server database get updated manually by exporting the Access tables as Excel files, and then importing them into the SQL Server tables.
It's not the most efficient process and could lead to errors in the SQL Server database if someone forgets to check that they are using the most recent data from Access. So I would like to integrate some of the tables from Access to my database in SQL Server. Ideally I would like for the tables in my SQL Server database to be updated whenever Access is updated or at least update the tables automatically in the SQL Server database when I open it.
Would replicating the Access tables be the best approach? I am using SQL Server 2014 Developer, so I think I have this capability. From my understanding, mirroring is for an entire database, not just pieces of it. However, I do not want to be able to alter the metadata from SQL Server and have it reflected in Access. I cannot tell if replicating the tables would do this...?
I also looked at this post about writing multiple insert statements but was confused (What is the best way to auto-generate INSERT statements for a SQL Server table?). Someone else suggested importing all the data into SQL Server and then using an ODBC driver to connect the two, but I'm also not sure how this would update the database in SQL Server anytime Access is updated.
If you have any suggestion and a link to easy to follow tutorial I would really appreciate it!
Thanks
In Access, go to 'External Data', ODBC Database, and connect to the SQL Server database directly - make sure you select 'Link to the data source by creating a linked table' on the first page of the wizard. Now, this linked table is available in Access, but is actually the SQL Server table.
Get rid of the local Access tables, using the new linked tables in their place in whatever queries, forms, reports, etc. that you have in Access.
Now, any changes to the tables you see in this Access db ARE changes to the SQL Server database.
First of all, sorry for my bad English.
I am new to Azure. We are planning to move some selected tables from our SQL Server database to an Azure SQL database because it is getting too much load. But existing stored procedures in SQL Server join these tables. So what is the best solution to get a result from both databases?
For example, the booking table is right now in the Azure database, but customer details, office details, and courier details are in our existing SQL Server database.
Updated
Initially, we had only one database in SQL Server, which contained all the tables: booking, customer details, office details, courier details, etc. Due to heavy load, the client decided to move some of the tables from SQL Server to Azure, so we moved the booking-related tables into Azure. The issue is that the database contains many stored procedures with joins between all these tables; if I move some tables to Azure, these won't work. I know there are methods to link multiple SQL Servers and write stored procedures by adding those databases as 'Linked Servers' and accessing them through [Server Name].[Database Name].[Table Name]. I think the same is possible between two Azure SQL databases.
My question is whether this cross-database querying is possible between two databases when one is situated in SQL Server and the other is in Azure.
Thank you.
Azure supports cross-database queries if both databases are in Azure. In your case, it seems some of them will be on-premises.
So the only option I can think of is to use linked servers to Azure. These queries can perform worse, depending on the data you want from them.
In general, you have to follow the steps below to create a linked server to Azure:
1. Run odbcad32.exe to set up a system DSN using SQL Server Native Client.
2. Now create a linked server:
EXEC master.dbo.sp_addlinkedserver
    @server = N'Can be any name',
    @srvproduct = N'Any',
    @provider = N'MSDASQL',
    @datasrc = N'name of DSN you created';
Now you can query Azure from your local server like below, using the linked server name you chose in @server:
SELECT * FROM [linked_server_name].database_name.schema_name.table_name;
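Alternatively, OPENQUERY executes the statement on the Azure side, which often performs better than four-part names because the filtering happens remotely (names below are placeholders):
-- The inner query runs entirely on the linked Azure SQL server
SELECT *
FROM OPENQUERY([linked_server_name], 'SELECT col1, col2 FROM dbo.remote_table WHERE col1 > 100');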
This blog explains it step by step and goes into some detail on the pitfalls:
https://blogs.msdn.microsoft.com/sqlcat/2011/03/07/linked-servers-to-sql-azure/
I have an Oracle database and a SQL Server database. There is one table, say Inventory, which contains millions of rows in both databases, and it keeps growing.
I want to compare the Oracle table data with the SQL Server data to find out which records are missing in the SQL Server table, on a daily basis.
Which is the best approach for this?
Create SSIS package.
Create Windows service.
I want this functionality to consume as few resources and take as little time as possible.
E.g.: 18 million records in Oracle and 16/17 million in SQL Server.
This situation of two different databases arises because there are two different applications, one online and one offline.
EDIT: How about connecting to SQL Server from Oracle through Oracle Database Gateway for SQL Server to:
1) Directly query SQL Server from Oracle to insert the missing records in SQL Server for the first time.
2) Create a trigger on Oracle which is executed when a record is deleted from Oracle and inserts the deleted record into a new Oracle table.
3) Create an SSIS package to map the newly created Oracle table to SQL Server to update the SQL Server records. This way only a few records have to be processed daily through SSIS.
What do you think of this approach ?
I would create an SSIS package and load the data from the Oracle table using a Data Flow with an OLE DB Source. If you have SQL Enterprise, the Attunity connectors are a bit faster.
Then I would load the keys from the SQL Server table into a Lookup transformation, where I would match the two sources on the key and direct unmatched rows into a separate output.
Finally, I would direct the unmatched rows output to an OLE DB Command to update the SQL Server table.
This SSIS package will require a lot of memory, but as the matching is done in memory with minimal IO, it will probably outperform other solutions for speed. It will need enough free memory to cache all the keys from the SQL Server Table.
SSIS also has the advantage that it has lots of other transformation functions available if you need them later.
What you basically want to do is replication from Oracle to SQL Server.
You could do this in SSIS, a Windows service, or indeed a multitude of other platforms.
The real trick is using the correct design pattern.
There are two general design patterns:
Snapshot Replication
You take all records from both systems and compare them somewhere (so far we have suggestions to compare in SSIS or to compare on Oracle, but not yet a suggestion to compare on SQL Server, although this is also valid; a sketch of that variant follows below).
You are comparing 18 million records here, so this is a lot of work.
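For the compare-on-SQL-Server variant, assuming the Oracle keys have already been staged into a SQL Server table (all names below are placeholders), the missing rows could be found like this:
-- Keys that exist in the staged Oracle extract but not in the SQL Server table
SELECT o.InventoryKey
FROM dbo.OracleKeysStage AS o
WHERE NOT EXISTS (
    SELECT 1
    FROM dbo.Inventory AS s
    WHERE s.InventoryKey = o.InventoryKey
);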
Differential replication
You record the changes in the publisher (i.e. Oracle) since the last replication, then you apply those changes to the subscriber (i.e. SQL Server).
You can do this manually by implementing triggers and log tables on the Oracle side, then use a regular ETL process (SSIS, command-line tools, text files, whatever), probably scheduled in SQL Agent, to apply these to SQL Server.
Or you could use the out-of-the-box replication capability to set up Oracle as a publisher and SQL Server as a subscriber: https://msdn.microsoft.com/en-us/library/ms151149(v=sql.105).aspx
You're going to have to try a few of these and see what works for you.
Given this objective:
I want this functionality to consume as few resources and take as little time as possible
transactional replication is far more efficient but complicated. For maintenance purposes, which platforms (.NET, SSIS, Python, etc.) are you most comfortable with?
Other alternatives:
If you can use Oracle Database Gateway for SQL Server, then you do not need to transfer data and can run the query directly.
If you can't use the Oracle gateway, you can use Pentaho Data Integration or another ETL tool to compare the tables and get the results. It is easy to use.
I think the best approach is using the Oracle gateway. Just follow the steps below; I have experience with a similar setup.
Install and Configure Oracle Database Gateway for SQL Server.
https://docs.oracle.com/cd/B28359_01/gateways.111/b31042/installsql.htm
Now you can create a database link from Oracle to SQL Server.
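A minimal sketch of such a link (the user, password, and gateway TNS alias are placeholders):
-- Database link from Oracle through the gateway to SQL Server
CREATE DATABASE LINK dblink_name
    CONNECT TO "sql_server_user" IDENTIFIED BY "password"
    USING 'gateway_tns_alias';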
Create a procedure which finds the records missing from the SQL Server database and inserts them from the Oracle database.
For example, you can use a statement like this inside your procedure:
INSERT INTO "dbo"."sql_server_table"@dblink_name ("column1","column2",...,"column5")
SELECT column1, column2, ..., column5 FROM oracle_table
MINUS
SELECT "column1","column2",...,"column5" FROM "dbo"."sql_server_table"@dblink_name;
Create a scheduler job which executes the procedure daily.
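For example, a hedged sketch using Oracle's DBMS_SCHEDULER (the job and procedure names are placeholders):
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'DAILY_SYNC_JOB',
    job_type        => 'STORED_PROCEDURE',
    job_action      => 'SYNC_MISSING_RECORDS', -- the compare-and-insert procedure above
    repeat_interval => 'FREQ=DAILY; BYHOUR=2',
    enabled         => TRUE
  );
END;
/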
When both databases are online, the missing records will be inserted into SQL Server; otherwise the scheduled job fails, or you can execute the procedure manually.
It takes minimal resources.
I would suggest a homemade ETL solution:
Schedule an Oracle job to export the source table data (on a daily basis, depending on the application logic) to plain CSV format.
Schedule a SQL Server job (with an acceptable delay after the first Oracle job) to read this CSV file and import it into a staging table inside SQL Server using BULK INSERT (see the sketch after this list).
The last part of the SQL Server job will read the staging table data and do the logic (insert into / update the target table). I suggest having another table to store the results of this daily job for reporting.
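A hedged sketch of steps 2 and 3 on the SQL Server side (the file path, tables, and columns are all placeholder assumptions):
-- Step 2: load the exported CSV into a staging table
BULK INSERT dbo.Inventory_Stage
FROM 'C:\exports\inventory.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    FIRSTROW = 2, -- skip the header row
    TABLOCK
);
-- Step 3: apply inserts and updates to the target table
MERGE dbo.Inventory AS tgt
USING dbo.Inventory_Stage AS src
    ON tgt.InventoryKey = src.InventoryKey
WHEN MATCHED THEN
    UPDATE SET tgt.Quantity = src.Quantity
WHEN NOT MATCHED THEN
    INSERT (InventoryKey, Quantity)
    VALUES (src.InventoryKey, src.Quantity);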