Not able to copy data from ADLS Gen2 to SQL Server using ADF - sql-server

I am not able to copy data from ADLS Gen2 to SQL Server (it's not Azure SQL) using ADF.
What I have done is this:
Created datasets: an ADLS Gen2 dataset (src) and a SQL Server dataset (tgt).
But it doesn't allow me to choose tgt as my sink, although it does list datasets as sink candidates when they are either Azure SQL or Data Lake.

You will have to create an Integration Runtime and configure it in your SQL Server linked service in ADF.
SQL Server is supported as a sink; you can find the details here.
Because SQL Server runs in a different compute environment than Azure, you will have to create an IR (Integration Runtime) so that Azure and SQL Server can communicate with each other.
Integration Runtime
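For reference, a minimal sketch of that linked-service wiring with the azure-mgmt-datafactory Python SDK might look like the following; the subscription, resource group, factory, IR name, server, and credentials are all placeholders, not values from the question:

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    IntegrationRuntimeReference,
    LinkedServiceResource,
    SecureString,
    SqlServerLinkedService,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# connect_via ties the linked service to the self-hosted IR so copy
# activities can reach the SQL Server sitting outside Azure.
linked_service = LinkedServiceResource(
    properties=SqlServerLinkedService(
        connection_string="Server=onprem-sql;Database=TargetDb;",  # placeholder
        user_name="adf_loader",                                    # placeholder
        password=SecureString(value="<secret>"),
        connect_via=IntegrationRuntimeReference(reference_name="MySelfHostedIR"),
    )
)
client.linked_services.create_or_update(
    "my-resource-group", "my-data-factory", "SqlServerSink", linked_service
)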

If you want to use an on-premises SQL Server as a dataset, you must install the self-hosted integration runtime manually:
A self-hosted integration runtime can run copy activities between a
cloud data store and a data store in a private network. It also can
dispatch transform activities against compute resources in an
on-premises network or an Azure virtual network. The installation of
a self-hosted integration runtime needs an on-premises machine or a
virtual machine inside a private network.
If you're using Data Flow, note that Data Flow doesn't support the self-hosted integration runtime, so you can't use the SQL Server connector there.
You must use the Copy activity instead.
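To make the Copy activity suggestion concrete, here is a rough sketch with the azure-mgmt-datafactory SDK again, assuming the ADLS Gen2 source dataset is delimited text and that both datasets and their linked services already exist under the placeholder names shown:

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    CopyActivity,
    DatasetReference,
    DelimitedTextSource,
    PipelineResource,
    SqlServerSink,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A single Copy activity: ADLS Gen2 (delimited text) -> on-prem SQL Server.
# The sink dataset must use the linked service bound to the self-hosted IR.
copy = CopyActivity(
    name="CopyAdlsToSqlServer",
    inputs=[DatasetReference(reference_name="AdlsGen2SrcDataset")],
    outputs=[DatasetReference(reference_name="SqlServerTgtDataset")],
    source=DelimitedTextSource(),
    sink=SqlServerSink(),
)
client.pipelines.create_or_update(
    "my-resource-group", "my-data-factory", "CopyToSqlServerPipeline",
    PipelineResource(activities=[copy]),
)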
HTH.

Related

How to connect to an on-premises SQL Server from an Azure ML notebook?

I typically use pyodbc when running Jupyter notebooks from my machine, but this does not work on Azure ML. My assumption is that this is caused by Azure ML not knowing whether I'm on my company's network, as I typically need a VPN to reach the server when I'm not in the office. The only solutions I can find online involve copying the data over with Azure Data Factory; however, I need to avoid this if possible, as there are many tables I will need to experiment with, but nothing is intended to be long term and I'm unsure what I will even end up using.
Ideally there is a way to make pyodbc work, but any other suggestions are welcome. I have researched integration runtimes but am unsure whether that would solve my problem here.
Unfortunately, an on-premises SQL Server is not supported as a data source in Azure ML.
Only the Azure-hosted data sources offered when registering a datastore (for example Azure Blob Storage, ADLS Gen2, and Azure SQL Database) are supported.
Approach 1)
You can copy your data from the on-premises SQL database to Azure SQL via the Copy Data tool in Azure Data Factory, and then connect Azure Machine Learning to the Azure SQL database directly by registering it as a datastore (see the sketch after these steps).
You can also use a self-hosted integration runtime in your data factory to connect to your on-premises SQL Server.
In the integration runtime setup, choose Option 2 to download the integration runtime, install it on your local machine, and register it with the registration keys shown in the portal.
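As a rough sketch of that Azure ML step (assuming the azureml-core SDK and that the ADF copy landed the data in an Azure SQL database; the server, database, credential, and table names below are placeholders):

from azureml.core import Workspace, Datastore, Dataset
from azureml.data.datapath import DataPath

ws = Workspace.from_config()

# Register the Azure SQL database (the ADF copy target) as an AML datastore,
# here with SQL authentication; a service principal works as well.
sql_datastore = Datastore.register_azure_sql_database(
    workspace=ws,
    datastore_name="staging_sql",
    server_name="my-sql-server",   # placeholder
    database_name="StagingDb",     # placeholder
    username="aml_reader",         # placeholder
    password="<secret>",
)

# Pull a table into a pandas DataFrame for experimentation.
query = DataPath(sql_datastore, "SELECT TOP 100 * FROM dbo.Customers")
ds = Dataset.Tabular.from_sql_query(query)
df = ds.to_pandas_dataframe()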
Approach 2)
If the data volume is large, you can automate the entire copy process from the on-premises SQL Server to Azure SQL by using an Azure DevOps pipeline.
References:
https://learn.microsoft.com/en-us/answers/questions/775844/unable-to-connect-sql-server-to-azure-ml-pipeline By Ramr-msft
How To: Azure Data Factory CI/CD with Azure DevOps pipelines — The YAML WAY! | by Raghavendra Bharadwaj | Servian

Setup Azure SQL Managed Instance to connect to on premise server on a VPN

I have SSIS packages on my Azure SQL MI server that need access to files on another server. The server that needs to be accessed is on a VPN. Is it possible to add the MI to the VPN so it can access the files on that server? Will it work?
The Managed Instance doesn't run your SSIS packages in Azure; it simply hosts them in an SSIS catalog database (SSISDB). The Azure Data Factory SSIS integration runtime executes the packages. Therefore, you need to connect your Azure Data Factory back to the on-premises network to access the remote server. This can be done using an integration runtime proxy, or you can join the SSIS integration runtime to a virtual network.
https://learn.microsoft.com/en-us/azure/data-factory/join-azure-ssis-integration-runtime-virtual-network

Migrate (on-prem) SQL data to Azure with Databricks (JDBC)

Is it possible to use the JDBC connector (https://docs.databricks.com/data/data-sources/sql-databases.html) to get data from a local SQL Server and export it to Delta Lake?
Using:
jdbcUrl = "jdbc:mysql://{0}:{1}/{2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
connectionProperties = {
    "user": jdbcUsername,
    "password": jdbcPassword,
    "driver": "com.mysql.jdbc.Driver"
}
Irrespective of whether you have MySQL or SQL Server, the Databricks JDBC support covers both, as outlined in the article you linked.
From the perspective of access to on-prem, the answer is yes, but Databricks must be able to connect to it. Usually this means deploying your Databricks clusters into your VNet, which has access to your on-prem resources, e.g. following the guidance here.
Alternatively, you could use an Azure Data Factory self-hosted integration runtime to move the data to a staging/"Bronze" storage account in the cloud and pick it up with a Databricks task that moves it into a Delta table.
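For the SQL Server case specifically, a minimal Databricks notebook sketch could look like the following; the host, database, credentials, secret scope, and Delta path are placeholders, and it assumes the cluster can reach the server (e.g. a VNet-injected workspace):

# Read an on-prem SQL Server table over JDBC using the Microsoft driver.
df = (spark.read.format("jdbc")
      .option("url", "jdbc:sqlserver://onprem-sql.corp.local:1433;databaseName=SalesDb")
      .option("dbtable", "dbo.Orders")
      .option("user", "databricks_reader")
      .option("password", dbutils.secrets.get("my-scope", "sql-password"))
      .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
      .load())

# Export to Delta Lake.
df.write.format("delta").mode("overwrite").save("/mnt/datalake/bronze/orders")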

To connect to an on premise database from MS Azure notebook via API call

I would like to connect to an on-premises database (say SQL Server) from an Azure Databricks notebook, via a REST API call. Also, I would like to perform an UPSERT operation on a table in that database from the same notebook.
Is it possible?
Kindly share the necessary steps.
You can use JDBC to connect to an on-premise database (say SQL Server) from Azure Databricks notebook.
You could reference this Azure document: SQL Databases using JDBC:
This article covers how to use the DataFrame API to connect to SQL
databases using JDBC and how to control the parallelism of reads
through the JDBC interface. This article provides detailed examples
using the Scala API, with abbreviated Python and Spark SQL examples
at the end.
You could also refer to the document Connect your Azure Databricks Workspace to your on-premises network that Sebastian Inones shared in the comments.
See also: Connecting to on-prem SQL Server through Azure Databricks.
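On the UPSERT part: Spark's JDBC writer only appends or overwrites, so a common pattern is to read the source over JDBC and MERGE into a Delta table; upserting back into the on-prem table itself would instead mean issuing a MERGE/UPDATE statement over a direct connection from the driver. A minimal sketch of the Delta variant, with placeholder names throughout:

from delta.tables import DeltaTable

# Incoming changes read from the on-prem database over JDBC;
# host, credentials, and table names are placeholders.
updates = (spark.read.format("jdbc")
           .option("url", "jdbc:sqlserver://onprem-sql.corp.local:1433;databaseName=SalesDb")
           .option("dbtable", "dbo.Customers")
           .option("user", "databricks_reader")
           .option("password", dbutils.secrets.get("my-scope", "sql-password"))
           .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
           .load())

# UPSERT into an existing Delta table keyed on customer_id.
target = DeltaTable.forName(spark, "bronze.customers")
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())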
Hope this helps.

Azure Data Factory from VM SQL Server to File on FTP

This year we moved from hosted servers to Azure VMs; we run two production servers (SQL and IIS). A vital component of our business is the bulk transfer of data files. We take customers' data from our SQL Server and write it out to a file (XLS, CSV, XML, PDF, Word, etc.), and then either email these files to customers or, in most cases, push them to their FTP server. We also have a few import procedures where we retrieve data files. All of this is currently done with SSIS packages.
We're examining a move to Azure Data Factory as a replacement for SSIS so that we can possibly move to either Azure SQL (if we can work out Broker Services limitations) or an Azure SQL Managed Instance.
I've done some preliminary work with ADF, but I saw a couple of posts about a lack of FTP support. Is it possible to create/deliver files to FTP and retrieve/consume files from FTP using ADF? Also, almost all of these jobs are automated, and we use SQL Agent to run the packages. What is the Azure equivalent for scheduling these jobs to run?
There is automation in ADF but the scheduler is per pipeline. Azure Automation is more powerful and can automate more than one pipeline (Azure Data Factory v2), if needed.
Automation with Azure Data Factory (ADF)
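On the scheduling question, the closest ADF equivalent of a SQL Agent job is a schedule trigger attached to a pipeline. A minimal sketch with the azure-mgmt-datafactory Python SDK (subscription, resource group, factory, and pipeline names are placeholders):

from datetime import datetime, timezone
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Run the export pipeline once a day, roughly what a daily SQL Agent job did.
trigger = TriggerResource(
    properties=ScheduleTrigger(
        recurrence=ScheduleTriggerRecurrence(
            frequency="Day",
            interval=1,
            start_time=datetime(2023, 1, 1, 2, 0, tzinfo=timezone.utc),
            time_zone="UTC",
        ),
        pipelines=[
            TriggerPipelineReference(
                pipeline_reference=PipelineReference(reference_name="ExportCustomerFiles")
            )
        ],
    )
)
client.triggers.create_or_update("my-resource-group", "my-data-factory", "DailyExport", trigger)
# The trigger still has to be started (from the portal or via the SDK) before it fires.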
You can receive files from FTP into an Azure Data Factory pipeline: see Copy data from FTP server by using Azure Data Factory. The idea is that a file received via FTP is fed into a particular pipeline activity, and that activity pushes the data to an Azure data store. It might be possible to reverse the flow and send data out.
The Azure SQL Database Managed Instance is the most on-premises-like database (PaaS) service, but SQL Server deployed on an Azure VM still has more functionality.
