How to update data in an Azure SQL database using Stream Analytics? - sql-server

How do I update or delete data in an Azure SQL database using Azure Stream Analytics?

Currently, Azure Stream Analytics (ASA) only supports inserting (appending) rows to SQL outputs (Azure SQL Database and Azure Synapse Analytics).
You should consider using workarounds to enable UPDATE, UPSERT, or MERGE on SQL databases, with Azure Functions as the intermediary layer.
You can find more information about such workarounds in this MS article.
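One common shape for that workaround is to point the ASA output at an Azure Function and have the function call a stored procedure that does the MERGE. A rough sketch only; the dbo.DeviceState table and its columns are placeholders, not anything from your setup:

    -- Hypothetical upsert procedure an Azure Function could call per output row.
    -- dbo.DeviceState and its columns are placeholders.
    CREATE PROCEDURE dbo.UpsertDeviceState
        @DeviceId NVARCHAR(50),
        @Temperature FLOAT,
        @EventTime DATETIME2
    AS
    BEGIN
        MERGE dbo.DeviceState AS target
        USING (SELECT @DeviceId AS DeviceId) AS source
            ON target.DeviceId = source.DeviceId
        WHEN MATCHED THEN
            UPDATE SET Temperature = @Temperature, EventTime = @EventTime
        WHEN NOT MATCHED THEN
            INSERT (DeviceId, Temperature, EventTime)
            VALUES (@DeviceId, @Temperature, @EventTime);
    END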

First, we need to understand what Azure Stream Analytics is.
An Azure Stream Analytics job consists of an input, a query, and an output. Stream Analytics ingests data from Azure Event Hubs, Azure IoT Hub, or Azure Blob Storage. The query, which is based on the SQL query language, can be used to easily filter, sort, aggregate, and join streaming data over a period of time. You can also extend this SQL language with JavaScript and C# user-defined functions (UDFs). You can easily adjust the event ordering options and the duration of time windows when performing aggregation operations through simple language constructs and/or configurations.
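To give a feel for that query language, here is a minimal sketch of an ASA query that averages a value over 5-minute tumbling windows; the input/output aliases and column names are assumptions, not from any particular job:

    -- Minimal ASA query sketch: average temperature per device over 5-minute tumbling windows.
    -- [iothub-input] and [sqldb-output] are placeholder aliases for the job's input and output.
    SELECT
        DeviceId,
        AVG(Temperature) AS AvgTemperature,
        System.Timestamp() AS WindowEnd
    INTO [sqldb-output]
    FROM [iothub-input] TIMESTAMP BY EventTime
    GROUP BY DeviceId, TumblingWindow(minute, 5)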
Azure Stream Analytics now natively supports Azure SQL Database as a source of reference data input. Developers can author a query to extract the dataset from Azure SQL Database, and configure a refresh interval for scenarios that require slowly changing reference datasets.
That means you cannot update or delete data in an Azure SQL database using Azure Stream Analytics; you can only append to it.
Azure Stream Analytics is not a database management tool.
Hope this helps.

Related

How can I replicate an existing data warehouse on Azure?

I am new to Azure and have no prior experience or knowledge of working with Azure data warehouse systems (now Azure Synapse Analytics).
I have access to a "read only" data warehouse (not in Azure) that looks like this:
I want to replicate this data warehouse as it is on the Azure cloud. Can anyone point me in the right direction (video tutorials or documentation) and outline the steps involved in this process? There are around 40 databases in this warehouse. And what if I wanted to replicate only specific ones?
We can't do that if you only have read-only permission. No matter which data warehouse it is, we need server admin or database owner permission to replicate a database.
You can verify this in any of the documents related to database backup/migration/replication, for example: https://learn.microsoft.com/en-us/sql/t-sql/statements/backup-transact-sql?view=sql-server-ver15#permissions
If you have enough permission, then you can do that. But for Azure SQL Data Warehouse, now called dedicated SQL pool (formerly SQL DW), we can't replicate an on-premises data warehouse to Azure directly.
The official documentation provides a way to import the data into an Azure dedicated SQL pool (formerly SQL DW):
Once your dedicated SQL pool is created, you can import big data with simple PolyBase T-SQL queries, and then use the power of the distributed query engine to run high-performance analytics.
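As an illustration of what that load step can look like, here is a sketch using the COPY statement, a simpler alternative to setting up full PolyBase external tables; the storage URL, SAS token, and table/column names are all placeholders:

    -- Sketch: load CSV exports from Blob Storage into a dedicated SQL pool table.
    -- The URL, SAS secret, and dbo.FactSales columns are placeholders.
    COPY INTO dbo.FactSales (SaleId, Amount, SaleDate)
    FROM 'https://yourstorage.blob.core.windows.net/exports/factsales/*.csv'
    WITH (
        FILE_TYPE = 'CSV',
        CREDENTIAL = (IDENTITY = 'Shared Access Signature', SECRET = '<sas-token>'),
        FIRSTROW = 2
    );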
You could also use another ETL tool to migrate the data from an on-premises data warehouse to Azure, for example Azure Data Factory; combine these two tutorials:
Copy data to and from SQL Server by using Azure Data Factory
Copy and transform data in Azure Synapse Analytics by using Azure Data Factory

How to query "Daily Active Users" (Microsoft Dataverse Analytics) via SQL?

What's the backend database query of this Microsoft Dataverse Analytics dashboard?
I'm trying to work around Dataverse analytics by accessing the transactional database behind that dashboard. I'm interested in getting the Daily Active Users (DAU) shown above, but via a SQL query reading directly from the backend database.
It appears that the DB is this https://learn.microsoft.com/en-us/dynamics365/customer-engagement/web-api/entitytypes?view=dynamics-ce-odata-9 but I have not been able to comprehend the data model and I'm unable to find the tables to get DAU. Any thoughts?
Thanks
Basically you have to do everything that MS is doing behind the scenes. CRM Online is a SaaS model and we don't have direct access to the Azure SQL server. But what you can do is one of these options:
Use the "Data Export Service" to replicate the data to your own Azure SQL server, then build Power BI on your own from that data
Use the REST Web API to pull the data and visualize it (may not be as flexible)
Based on your needs and urgency, you may wait for or use the preview version of the TDS endpoint for read-only direct SQL access (a rough sketch of such a query follows below). Read more
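I don't know exactly which tables back that dashboard, so take this only as a starting sketch: if auditing is enabled in your environment, one plausible way to approximate DAU over the TDS endpoint is to count distinct users per day from the audit table. The table and column choice here is my assumption, and the TDS endpoint only supports a read-only subset of T-SQL, so this may need adjusting:

    -- Hypothetical DAU approximation over the Dataverse TDS endpoint (read-only).
    -- Assumes auditing is turned on; [audit], userid, and createdon are my assumptions
    -- about where user activity is recorded, not confirmed by the dashboard.
    SELECT CAST(a.createdon AS DATE)  AS ActivityDate,
           COUNT(DISTINCT a.userid)   AS DailyActiveUsers
    FROM [audit] AS a
    GROUP BY CAST(a.createdon AS DATE)
    ORDER BY ActivityDate;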

Allow Data Push into an Azure SQL Database?

I'm relatively new to Azure and am having trouble finding what options are out there for connecting to an existing SQL database to push data into it.
The situation is that we have an external client who needs to connect to our Azure SQL database to push data into it, on an ongoing basis. We can't give them permission to get into our database, so we're looking at what we can do to allow data in. At this point the best option seems to be to create a web service deployed in Azure that will validate the data and then push it into our database.
The question I have is, are there other options to do this in an easier way? Are there Azure services or processes that can be set up to automatically process a file and pull the data into a database? Any other go-between options when each side has their own database and for security reasons can't just open up access to it?
Azure Data Factory works great for basic ETL. If neither party can grant direct access, you can use an intermediate repository like Blob Storage to drop csv/xml/json files for ingestion. If they'll grant you access to pull, you can set up a linked service that more or less functions the same as a linked server in MSSQL. As of the last release, ADF now supports Azure-hosted SSIS packages too.
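If you go the Blob Storage drop-zone route, one lightweight sketch (assuming the client drops CSV files and you have already created a database-scoped credential with a SAS token) is to pull each file into a staging table with BULK INSERT and validate/merge from there; every object name below is a placeholder:

    -- One-time setup: external data source pointing at the drop-zone container.
    -- BlobSasCredential is a database-scoped credential (SAS token) created beforehand.
    CREATE EXTERNAL DATA SOURCE ClientDropZone
    WITH (
        TYPE = BLOB_STORAGE,
        LOCATION = 'https://yourstorage.blob.core.windows.net/incoming',
        CREDENTIAL = BlobSasCredential
    );

    -- Per file: load into a staging table, then validate and merge into the real tables.
    BULK INSERT dbo.StagingOrders
    FROM 'orders.csv'
    WITH (DATA_SOURCE = 'ClientDropZone', FORMAT = 'CSV', FIRSTROW = 2);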
I would do this via SSIS using SQL Server Management Studio (if it's a one-time operation). If you plan to do this repeatedly, you could schedule the SSIS job to execute on a schedule. SSIS will do bulk inserts using small batches, so you shouldn't have transaction log issues and it should be efficient (because of the bulk inserting). Before you do this insert, though, you will probably want to consider your performance tier so you don't get major throttling by Azure and possible timeouts.

Load data from XML file to SQL database in Azure using Logic Apps

I'm new to Azure development, and I'm having trouble finding examples of what I want to do.
I have an XML file in Azure file storage and I want to use a Logic App to get that XML data into a SQL database.
I guess I will need to create a "SQL Database" in Azure, before the Logic App can be written (correct?).
Assuming that I have some destination SQL database, are there Logic App connectors/triggers/whatever that I can use to: 1) recognize that a file has been uploaded to Azure, and 2) process that XML to go into a database?
If so, can such connectors/triggers/whatevers be configured/written so that any business rules I have, for massaging the data between the XML and the database, can be specified?
Thanks!
Yes, you are right: you need to create the database first and then write the Logic App to perform the necessary functionality.
There are lots of connectors with triggers, such as the Blob Storage connector, the SQL connector, etc.
You can perform your processing with the help of "Enterprise Connectors", or you can do custom processing using Azure Functions, which integrate with Logic Apps.
In order to perform CRUD operations on an Azure SQL Database, you can use the SQL Connector. Documentation on the connector can be found here:
Logic App SQL Connector
Adding SQL Connector to a Logic App
I've also written a blog myself on how to use the SQL Connector to perform bulk operations using a stored procedure and OPENJSON: Bulk insert into SQL
This might help you in designing your Logic App if you choose to use a stored procedure.
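If you do go the stored procedure route, here is a minimal sketch of what the Logic App's SQL connector could call. It is not taken from that blog; dbo.Items, its columns, and the JSON property names are placeholders (the Logic App would convert the XML to JSON first, or you could accept XML and shred it with nodes()/value() instead):

    -- Sketch: bulk insert from a JSON payload passed in by the Logic App SQL connector.
    -- dbo.Items and the JSON property names are placeholders.
    CREATE PROCEDURE dbo.ImportItems
        @ItemsJson NVARCHAR(MAX)
    AS
    BEGIN
        INSERT INTO dbo.Items (Id, Name, Price)
        SELECT Id, Name, Price
        FROM OPENJSON(@ItemsJson)
        WITH (
            Id    INT            '$.id',
            Name  NVARCHAR(100)  '$.name',
            Price DECIMAL(10, 2) '$.price'
        );
    END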

Azure Search from existing database

I have an existing SQL Server database that uses Full Text Search and Semantic search for the UI's primary searching capability. The tables used in the search contain around 1 million rows of data.
I'm looking at using Azure Search to replace this; however, my database relies upon the full-text-enabled tables for its core functionality. I'd like to use Azure Search for the "searching" but still have my current table structure in place to be able to edit records and display the detail record when something has been found.
My thoughts to implement this is to:
Create the Azure indexes
Push all of the searchable data from the Full Text enabled table in SQL Server to Azure Search
Azure Search to return IDs of documents that match the search criteria
Query the existing database to fetch the rows with those IDs to display on the front end
When some data in the existing database changes, schedule an update in Azure Search to ensure the data stays in sync
Is this a good approach? How do hybrid implementations work where your existing data is in an on-prem database but you want to take advantage of Azure Search?
Overall, your approach seems reasonable. A couple of pointers that might be useful:
Azure SQL now has support for Full Text Search, so if moving to Azure SQL is an option for you and you still want to use Azure Search, you can use the Azure SQL indexer (a sketch of the change tracking setup it can use is below). Or you can run SQL Server on IaaS VMs and configure the indexer using the instructions here.
With on-prem SQL Server, you might be able to use the Azure Data Factory sink for Azure Search to sync data.
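If you do end up on the Azure SQL indexer, its incremental sync is typically driven by SQL integrated change tracking, which you enable on the database and on each indexed table, roughly like this (dbo.SearchableItems is a placeholder):

    -- Enable change tracking so the Azure Search SQL indexer can pick up
    -- inserts, updates, and deletes incrementally. dbo.SearchableItems is a placeholder.
    ALTER DATABASE CURRENT
    SET CHANGE_TRACKING = ON (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

    ALTER TABLE dbo.SearchableItems
    ENABLE CHANGE_TRACKING WITH (TRACK_COLUMNS_UPDATED = ON);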
I actually just went through this process, almost exactly. Instead of SQL Server, we are using a different backend data store.
Foremost, we wrote an application to sync all existing data. Pretty simple.
For new documents being added, we made the choice to sync to Azure Search synchronously rather than async. We made this choice because we measured excellent performance when adding to and updating the index. 50-200 ms response time and no failures over hundreds of thousands of records. We couldn't justify the additional cost of building and maintaining workers, durable queues, etc. Caveat: Our web service is located in the same Azure region as the Azure Search instance. If your SQL Server is on-prem, you could experience longer latencies.
We ended up storing about 80% of each record in Azure Search. Obviously, the more you store in Azure Search, the less likely you'll have to perform a worst-case serial "double query."
