I was wondering if there is a way to insert multiple rows into an Azure SQL Database using the SQL connector in a Logic App.
I've got an HTTP GET request which returns an array of results, and I want to use the items of this array as the data source for the row inserts.
I know there is a way to iterate through arrays in Logic Apps using "foreach". I have tried this, but the "Insert Row" action fails with the error "foreach is not supported in API connector action".
Is there a different way of achieving this?
We don't yet have a batch insert, but it is on the backlog. However, it's still possible by writing a stored procedure that takes an array. For example, you could write a stored procedure that accepts a JSON array and uses OPENJSON to insert one row per item.
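A minimal sketch of such a procedure, assuming a hypothetical target table dbo.MyTable with Name and Age columns (adjust the JSON paths and types to your payload):

```sql
-- Hypothetical example: the Logic App passes the whole JSON array as one
-- NVARCHAR parameter, and OPENJSON shreds it into rows for a set-based insert.
CREATE PROCEDURE dbo.InsertFromJson
    @json NVARCHAR(MAX)
AS
BEGIN
    SET NOCOUNT ON;

    INSERT INTO dbo.MyTable (Name, Age)
    SELECT j.Name, j.Age
    FROM OPENJSON(@json)
         WITH (
             Name NVARCHAR(100) '$.Name',
             Age  INT           '$.Age'
         ) AS j;
END;
```

In the Logic App you would then call it with the SQL connector's "Execute stored procedure" action, passing the HTTP response body as the @json parameter.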
The problem we are facing is that we have to make some inserts/updates into a certain database on one server based on data from another server. The first idea that came to my mind was using linked servers, but it was rejected by the database management team (unfortunately, we were given no reasons for the prohibition). I then suggested splitting the task into two SQL scripts, so the first one would print out the data we want from the first server so it could be pasted into the second script to update the second server (that was also rejected).
In short: we have some data in table T on server A, and we want to query T to extract some data and insert it into a temporary table on server B without using linked servers. Once the data is in the temporary table, we could write a T-SQL script that uses it to update some tables on server B. Is there any way to accomplish this?
Any ideas would be welcome.
Thanks in advance.
The solution that comes to my mind is to write a procedure that saves the data to a CSV file after each insert. The second server can then check the CSV file against some condition (for example, that it is not empty), insert its contents into a table on the second server, and then run another step to empty the CSV file once it has been loaded. After that you can update the second server based on your requirements. I hope this solution helps you.
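As a rough sketch of the server B side (the file path, table, and column names are made up), the CSV could be loaded into a temporary table with BULK INSERT and only processed when it actually contains rows:

```sql
-- Hypothetical sketch for server B: load the exported CSV into a temp table,
-- then only run the updates if any rows arrived.
CREATE TABLE #StagingFromA
(
    Id        INT,
    SomeValue NVARCHAR(200)
);

BULK INSERT #StagingFromA
FROM '\\shared\exports\server_a_data.csv'   -- a path visible to server B
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

IF EXISTS (SELECT 1 FROM #StagingFromA)
BEGIN
    -- update server B tables from the staged data
    UPDATE t
    SET    t.SomeValue = s.SomeValue
    FROM   dbo.TargetTable AS t
    JOIN   #StagingFromA   AS s ON s.Id = t.Id;
END;
```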
I want to do inserts into a SQL Server database with an ASP.NET Core API.
The data I get includes 9 values, and 4 of them reference other tables. Is it better to simply try the insert via EF Core and catch the SQL exception if some values are not in those tables, or is it better to check for that beforehand (which means more queries in one API request)?
If the data is invalid, I only do a single insert into another table.
The percentage of invalid data is about 5%.
You should check whether the referenced data already exists in the other tables.
Also, depending on your DB model configuration, you might end up with duplicate items if you try to insert items that already exist without retrieving and updating them.
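In EF Core that check is just a few Any() queries before SaveChanges; if you would rather push it down to the database instead, a hedged T-SQL sketch of the same idea (all table and column names are made up) could look like this:

```sql
-- Hypothetical sketch: validate the four referenced keys in one round trip,
-- then insert into either the main table or the "invalid data" table.
CREATE PROCEDURE dbo.InsertIfValid
    @AId INT, @BId INT, @CId INT, @DId INT, @Value1 NVARCHAR(100)
AS
BEGIN
    SET NOCOUNT ON;

    IF  EXISTS (SELECT 1 FROM dbo.TableA WHERE Id = @AId)
    AND EXISTS (SELECT 1 FROM dbo.TableB WHERE Id = @BId)
    AND EXISTS (SELECT 1 FROM dbo.TableC WHERE Id = @CId)
    AND EXISTS (SELECT 1 FROM dbo.TableD WHERE Id = @DId)
        INSERT INTO dbo.MainTable (AId, BId, CId, DId, Value1)
        VALUES (@AId, @BId, @CId, @DId, @Value1);
    ELSE
        INSERT INTO dbo.InvalidData (AId, BId, CId, DId, Value1)
        VALUES (@AId, @BId, @CId, @DId, @Value1);
END;
```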
Scenario
Note: I am using SQL Server 2017 Enterprise
I am looping through a list of databases and copying data to them out of one database. This database will only be accessed by the script (no other transactions will be made against it from anything else). The copy ranges from straight table-to-table copies to more complex, longer-running queries or stored procedures. All of this is done with SQL Server jobs calling procedures; I'm not using anything like SSIS.
Question
Instead of looping through all the databases and running the statements one at a time, I want to be able to run them in parallel. Is there an easy way to do this?
Options I've thought of:
Run each data transfer as a job and then run all the jobs at once (see the sketch after this list). From my understanding, they would be executed asynchronously, but I'm not 100% sure.
Generate the SQL statements and write a script outside of SQL Server (e.g. Powershell or Python) and run all the commands in parallel
Leverage SSIS
I prefer not to do this, since this would take too much work and I'm not very familiar with it. This may be used down the road though.
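For option 1, here is roughly what I have in mind; as far as I know, msdb.dbo.sp_start_job only submits the job request and returns immediately, so starting several jobs back to back should run the transfers at the same time (the job names below are made up):

```sql
-- Hypothetical sketch: each job wraps one database's transfer procedure.
-- sp_start_job queues the job and returns right away, so these run in parallel.
EXEC msdb.dbo.sp_start_job @job_name = N'Copy_To_Database1';
EXEC msdb.dbo.sp_start_job @job_name = N'Copy_To_Database2';
EXEC msdb.dbo.sp_start_job @job_name = N'Copy_To_Database3';
```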
Use PowerShell...
Create a table on the central database to house instance / connection string details. (Remember to obfuscate these for security.)
Create another table to house the queries.
Create a third table to map instance to query (a sketch of these three tables follows these steps).
In PowerShell, create a collection / list object deserialized from your data entries; each element will be made up of three properties {Source / Destination / Query}.
Write a method / function to carry out the ETL work: connect to the database, read from the source, write to the destination.
Iterate over the collection using the Foreach-Parallel construct with your function nested within. This will initiate a new SPID for each element in the collection and pass those values into your function, where the work is carried out.
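A rough sketch of what the three control tables from the first steps could look like (names and columns are placeholders only):

```sql
-- Hypothetical layout for the three control tables described above.
CREATE TABLE dbo.InstanceList
(
    InstanceId       INT IDENTITY(1, 1) PRIMARY KEY,
    InstanceName     SYSNAME       NOT NULL,
    ConnectionString NVARCHAR(500) NOT NULL   -- store obfuscated/encrypted
);

CREATE TABLE dbo.QueryList
(
    QueryId   INT IDENTITY(1, 1) PRIMARY KEY,
    QueryText NVARCHAR(MAX) NOT NULL
);

CREATE TABLE dbo.InstanceQueryMap
(
    InstanceId INT NOT NULL REFERENCES dbo.InstanceList (InstanceId),
    QueryId    INT NOT NULL REFERENCES dbo.QueryList (QueryId),
    PRIMARY KEY (InstanceId, QueryId)
);
```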
I am moving data from a folder in Azure Data Lake to SQL Server using Azure Data Factory (ADF).
The folder contains hundreds of .csv files. However, one intermittent problem with these CSVs is that some (not all) have a final row that contains a special character, which causes the load to fail when the target SQL table has data types other than NVARCHAR(MAX). To get around this, I first use ADF to load the data into staging tables where all columns are set to NVARCHAR(MAX), and then I insert the rows that do not contain the special character into tables with the appropriate data types.
This is a weekly process involving over a terabyte of data, and it takes forever to move, so I am looking into ways to import directly into my final tables rather than having a staging component.
I notice that there is a 'pre-copy script' field that can execute before the load to SQL Server. I want to add code that will let me filter out special characters OR null rows before loading to SQL Server.
I am unsure how to approach this, since the CSVs are not stored in a table, so SQL code wouldn't work. Any guidance on how I can use the pre-copy script to clean my data before loading it into SQL Server?
The pre-copy script is a script that you run against the database before copying new data in; it is not a way to modify the data you are ingesting.
I already answered this on another question, providing a possible solution using an intermediate table: Pre-copy script in data factory or on the fly data processing
Hope this helped!
You could consider a stored procedure: https://learn.microsoft.com/en-us/azure/data-factory/connector-azure-sql-database#invoking-stored-procedure-for-sql-sink
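A hedged sketch of that pattern (the type, procedure, table, and column names below are placeholders): the copy activity hands each batch of rows to the procedure through a user-defined table type, and the procedure filters out the offending rows before inserting into the final table.

```sql
-- Hypothetical sketch of a stored-procedure sink for the ADF copy activity.
-- ADF passes the incoming rows via a user-defined table type parameter.
CREATE TYPE dbo.CsvRowType AS TABLE
(
    Col1 NVARCHAR(MAX),
    Col2 NVARCHAR(MAX)
);
GO

CREATE PROCEDURE dbo.usp_LoadCsvRows
    @rows dbo.CsvRowType READONLY
AS
BEGIN
    SET NOCOUNT ON;

    -- Keep only rows that are non-null and free of the offending character,
    -- converting to the final data types on the way in.
    INSERT INTO dbo.FinalTable (Col1, Col2)
    SELECT TRY_CAST(r.Col1 AS INT),
           r.Col2
    FROM   @rows AS r
    WHERE  r.Col1 IS NOT NULL
       AND r.Col1 NOT LIKE '%' + NCHAR(65533) + '%';  -- stand-in for the actual special character
END;
```

In the copy activity's sink you would then reference the stored procedure name, the table type name, and the table type parameter name instead of a plain target table.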
What I want to do is build a dynamic data pull from different SQL source servers (Server1, Server2, Server3, etc.) down to dynamic locations on my SQL Server (Dev, Prod) into databases (database1, database2, etc.).
The tables will be dropped and recreated each time the package is run, so that I am sure I match the source servers if they change anything on the source (field names, data types, lengths, etc.) and I will still get the data to extract. I want to pull this down using a single data flow in a foreach loop.
I have a table that holds all the server names, databases, and tables, and I want to loop through that table and pull every table listed in it down to my server (server1.database1.table_x, server5.database3.table_y, etc.), so that I don't have to build a new data flow for each table.
To do this I have already built the foreach loop with a SQL task that dumps its results into an object variable. The foreach loop then takes that object, which has 7 different fields (Source_Server_Name, Source_Server_Type_Driver, Source_Database, Source_Table, Source_Where_Clause, Source_Connection_String, then the destination equivalents), and puts each of those fields into a different String variable for use inside the loop.
I can change the connections dynamically using the variables, but I can't figure out how to get the column mapping in the data flow to work.
Is there some kind of script task I can use to edit the backend XML that will create the column mapping for me so the metadata does not error out? Any help would be greatly appreciated :-)
This is the best illustrated example I could find of what I am doing; just remember that I need a different metadata setup for each table I pull down to my server.
http://sql-bi-dev.blogspot.com/2010/07/dynamic-database-connection-using-ssis.html
The solution I ended up using is BIML, which generates the packages on the fly using dynamic SQL. Not pretty, but it works :-)
I have heard that it is possible to dynamically generate and publish the packages, but I would never go this route. I have done something similar using C# code, which can be run from an application, via SQL Agent, or from inside an SSIS package Script Task.
If you try this approach, look into SqlConnection and SqlCommand, then write code to build the SQL statements dynamically.
For example, run the CREATE TABLE statements using ExecuteNonQuery(), then use a data reader to read the source data and pass that reader to SqlBulkCopy to write to the destination.