Power Automate - Insert/Update rows in an on-prem SQL Server table

My goal is to copy data from one SQL table to another SQL Table.
I'm trying to create a conditional Insert/Update statement in Power Automate based on Row ID. I have two tables right now with the same columns.
Source SQL Table
Destination SQL Table
I would like to update a row if its Row ID already exists, or create a new row if it does not.
I tried Execute SQL query but this is not supported (known issue).
I tried "Transform data using Power Query" to fetch rows from Source and Destination, then added an if condition to compare "Source.ProjectName = Dest.ProjectName". It goes into two Apply to each loops but still does not create the items.

Nothing like searching for an answer to your specific problem and finding exactly one other person with the same issue, but no resolution.
Fortunately, I've managed to work out a decent solution for an SQL Upsert with an On-Premises SQL Connector in Power Automate.
Here's the general overview; I'll go through it step by step after:
The first step is to retrieve the single row by ID using Get row (V2).
The next step is to parse the JSON of the body of the previous call.
Here is the Schema that I used:
{
    "type": "object",
    "properties": {
        "status": {
            "type": "integer"
        },
        "message": {
            "type": "string"
        },
        "error": {
            "type": "object",
            "properties": {
                "message": {
                    "type": "string"
                }
            }
        },
        "source": {
            "type": "string"
        }
    }
}
Now the key bit: hit Configure run after for the Parse JSON action and have it run on both Success and Failure of the previous action.
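In the workflow definition (Peek code), that setting shows up as a runAfter entry on the Parse JSON action, roughly like the fragment below; the action name Get_row_(V2) assumes the default naming:
"runAfter": {
    "Get_row_(V2)": [ "Succeeded", "Failed" ]
}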
Then we add a conditional that checks the status code of the Get Row action (as output by the Parse JSON action). If it failed with a 404 Status, we do an Insert. Otherwise, do an Update.
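As a sketch, assuming the parse step is named Parse JSON, the condition's expression can be something like:
equals(body('Parse_JSON')?['status'], 404)
If that evaluates to true (row not found), take the Insert branch; otherwise take the Update branch.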
Hopefully this helps anyone else trying to work around the limitations of the On-Premises connector.

Related

Power Automate populate Excel

I want that, when a contract is signed, the following information is entered in an Excel table: Agreement ID, Agreement Name, Creation Date, Status. And if the row already exists, it should just change the status.
But it doesn't really work; it creates several new rows each time.
The foreach condition gives "False" for each operation even though the row already exists.
After reproducing from our end, we observed that adding a List rows present in a table action before the condition (since it gets all the rows present in the table) and changing the condition to the expression below updated the changes in Excel.
array(body('List_rows_present_in_a_table')?['value']?[0]?['Agreement ID'])
Here is my Logic App
Below is my excel initially
Result:

Copy tables from an on-prem database to an Azure SQL database

I am quite new to Logic Apps. I am trying to copy tables from an on-prem database to an Azure SQL database. Everything is set up (gateway, ...). I loop over the tables with a For each action but I am stuck on the Insert row (V2) action.
When I run the Logic App, I obtain the error:
{
    "status": 400,
    "message": "A value must be provided for item. XXX",
    "error": {
        "message": "A value must be provided for item."
    },
    "source": "sql-we.azconn-we.p.azurewebsites.net"
}
I know from Insert row (V2) that I have to provide an item (the Row parameter), but I don't know how to add it from the dynamic content, because the only item it shows is "Body":
How can I add the item?
Can you include your table schema and the Logic App model you are using, so I can answer more precisely?
I assume you have only one column named 'Body' in your on-prem DB table. Make sure the previous step (the Get rows (V2) action) is configured so that its result contains multiple rows.
The outputs of the Get rows (V2) operation are dynamic. Usually the default parameter "value" contains the complete result, as seen below.
When the result contains more than one row, the action is automatically placed inside a "For each" loop.
Values that do not match the field data types are not listed under dynamic content.
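As a rough sketch (the column names ProjectName and Status, and the loop name For_each, are assumptions), the Row/item parameter is just an object mapping destination column names to values taken from the current loop item; in code view the relevant part of the Insert row (V2) action would look something like:
"body": {
    "ProjectName": "@items('For_each')?['ProjectName']",
    "Status": "@items('For_each')?['Status']"
}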

Kafka Connect + JDBC Source connector + JDBC Sink connector + MSSQL SQL Server = IDENTITY_INSERT issues

I am trying to figure out why I am receiving an "IDENTITY_INSERT" error when trying to use a JDBC sink connector to sink data into a SQL Server database off of a topic that is also written to by a JDBC source connector connected to the same SQL Server database.
The overall objective:
Currently there is a SQL Server database being used by the backend for storage in a traditional sense, and we are trying to transition to using Kafka for all of the same purposes, however the SQL Server database must remain for the time being as there are services that still rely upon it, and we have a requirement that all data that is on Kafka be mirrored in the SQL Server database.
What I am trying to accomplish:
I am trying to create a setup in which there are the following:
One SQL Server database (all tables with identical primary key of "id" which auto-increments and is set by SQL Server)
Kafka cluster, including Kafka connect with:
A JDBC source connector to sync what is in the SQL Server table onto a Kafka topic; let's call it AccountType for both the topic and the table
A JDBC sink connector that subscribes to the same AccountType topic and sinks data into the same AccountType table in the SQL Server database
The expected behavior is:
if a legacy service writes/updates a record in SQL Server
the source connector will pick up the change and write it to its corresponding Kafka topic
The sink connector will receive the message on that same topic; however, since the change originated in SQL Server and has therefore already been applied from the sink connector's perspective, it will find a match on the primary key, see that there is no change to be made, and move along
if a new service, designed to work with Kafka, updates a record and writes it to the correct topic:
the JDBC sink connector will receive the message on the topic as an offset
since the sink connector was configured with upsert mode, it will find a match on the primary key in the target database and update the corresponding record in the target database
the source connector will then detect the change, triggering it to write the change to the corresponding topic
My assumption at this point is that one of two things will happen, either:
the source connector will not write to the topic since it would only be duplicating the last message or
the source connector will write the duplicate message to the topic, however it will be ignored by the sink as there would be no resulting database record changes
This expected behavior aligns with everything I have found in the documentation and is, as best as I can tell, in accordance with the JDBC sink deep dive guide found here: https://rmoff.net/2021/03/12/kafka-connect-jdbc-sink-deep-dive-working-with-primary-keys/
What is happening instead:
with the Kafka cluster all started up, and the database empty, both connectors are successfully created
A row is inserted into a table in the database using an external service
the source connector successfully picks up the change and writes a record to the topic on Kafka (which has been split by the transforms such that the field representing the SQL Server table PK has been extracted and set as the message key, and removed from the value)
(the problem) the sink connector then receives the message on that topic and...
...here's the issue: based on several videos and examples I could find, nothing should happen, since that record is already up to date in the database. Instead, however, it immediately attempts to write the entire message, as-is, to the target table, resulting in the following:
java.sql.BatchUpdateException: Cannot insert explicit value for identity column in table 'AccountType' when IDENTITY_INSERT is set to OFF.
Which makes sense: the message from the topic has a primary key field in it, and if IDENTITY_INSERT isn't enabled on the table then writing that column shouldn't be allowed. Just for fun, I tried throwing in an additional transform to remove the id field before writing, and instead configured another field in the table that has a "Unique" constraint. When I repeated the steps, this time it did not complain about writing the primary key; however, it still immediately tried to insert the record, which resulted in another error since it would violate the unique constraint, which again makes perfect sense.
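For context, a rough sketch of the statement the sink's upsert mode produces on SQL Server (the name column is an assumption; the shape follows the dialect's MERGE-based upsert):
merge into AccountType with (HOLDLOCK) as target
using (select ? as id, ? as name) as incoming
on (target.id = incoming.id)
when matched then
    update set target.name = incoming.name
when not matched then
    insert (name, id) values (incoming.name, incoming.id);
Even when the incoming row matches an existing one, the statement still declares an insert into the identity column in its WHEN NOT MATCHED branch, which appears to be what SQL Server is objecting to while IDENTITY_INSERT is OFF.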
Where I am stuck:
If all of the above makes sense can anyone tell me why it is automatically trying to insert despite being set to upsert?
Notes:
All of this is being set up using Docker containers provided by Confluent for Confluent Platform version 6.2.0
Source connector configuration:
{
    "connection.url": "jdbc:sqlserver://mssql:1433;databaseName=REDACTED",
    "connection.user": "REDACTED",
    "connection.password": "REDACTED",
    "connection.attempts": "3",
    "connection.backoff.ms": "5000",
    "table.whitelist": "AccountType",
    "db.timezone": "UTC",
    "name": "sql-server-source",
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "dialect.name": "SqlServerDatabaseDialect",
    "config.action.reload": "restart",
    "topic.creation.enable": "false",
    "tasks.max": "1",
    "mode": "timestamp+incrementing",
    "incrementing.column.name": "id",
    "timestamp.column.name": "created, updated",
    "validate.non.null": true,
    "key.converter": "org.apache.kafka.connect.converters.LongConverter",
    "value.converter": "io.confluent.connect.json.JsonSchemaConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "auto.register.schemas": "true",
    "schema.registry.url": "http://schema-registry:8081",
    "errors.log.include.messages": "true",
    "transforms": "copyFieldToKey,extractKeyFromStruct,removeKeyFromValue",
    "transforms.copyFieldToKey.type": "org.apache.kafka.connect.transforms.ValueToKey",
    "transforms.copyFieldToKey.fields": "id",
    "transforms.extractKeyFromStruct.type": "org.apache.kafka.connect.transforms.ExtractField$Key",
    "transforms.extractKeyFromStruct.field": "id",
    "transforms.removeKeyFromValue.type": "org.apache.kafka.connect.transforms.ReplaceField$Value",
    "transforms.removeKeyFromValue.blacklist": "id"
}
Sink connector configuration:
{
    "connection.url": "jdbc:sqlserver://mssql:1433;databaseName=REDACTED",
    "connection.user": "REDACTED",
    "connection.password": "REDACTED",
    "connection.attempts": "3",
    "connection.backoff.ms": "5000",
    "table.name.format": "${topic}",
    "db.timezone": "UTC",
    "name": "sql-server-sink",
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "dialect.name": "SqlServerDatabaseDialect",
    "auto.create": "false",
    "auto.evolve": "false",
    "tasks.max": "1",
    "batch.size": "1000",
    "topics": "AccountType",
    "key.converter": "org.apache.kafka.connect.converters.LongConverter",
    "value.converter": "io.confluent.connect.json.JsonSchemaConverter",
    "value.converter.schema.registry.url": "http://schema-registry:8081",
    "insert.mode": "UPSERT",
    "pk.mode": "record_key",
    "pk.fields": "id"
}

Excel Query Won't Update

I have put a query that connects to SQL Server into an Excel spreadsheet (formatted as a table). I update this query daily. Today, when I tried to refresh the query, it generated an error and would not load new data.
The error says:
"This won't work because it would move cells in a table on your worksheet."
All the research I've done on this error seems to relate to users attempting to insert rows, but that's not what I'm doing.
My query will only return two rows, plus the headers. It's not possible for the query to return anything more, so cells shouldn't move in the table.
Any suggestions to solve this error?

How do you delete all the documents from a Cloudant Database or even Delete the database in Bluemix

I have been working on the Bluemix demo templates and have filled up a Cloudant database with 12k documents. Now I want to start over as this is test data, and want to start with a clean database.
lots of documents
I found I could interactively delete documents 100 at a time, but I need to just start over with a clean database.
Is there a command or menu option for this?
I tried making a new database, which was easy, but I didn't find a way to delete the old one or rename the new one.
thanks
john
You could perform a DELETE request to your database:
curl -X DELETE https://username:password@username.cloudant.com/yourdatabase
The response indicates whether the request succeeded.
As an alternative you can use the Dashboard (GUI). Once you are on the database panel and selected a single database, click the icon for settings and select Delete.
If you want to keep the existing database but remove the documents, you can use CouchDB's Bulk API.
Make a call to the _all_docs endpoint to retrieve the current list of all document identifiers and revisions. For each document retrieved, consisting of the _id and _rev fields, add a _deleted field with the value true.
Now send an HTTP POST to the _bulk_docs endpoint for the database with the updated document list.
POST /database/_bulk_docs HTTP/1.1
Content-Type: application/json

{
    "docs": [
        {
            "_id": "some_document_id",
            "_rev": "1-6a466d5dfda05e613ba97bd737829d67",
            "_deleted": true
        }
        ...
    ]
}
Full details on the Bulk API are here.
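For example, a sketch of the two calls with curl, reusing the hypothetical credentials and database name from above:
curl -X GET 'https://username:password@username.cloudant.com/yourdatabase/_all_docs'
curl -X POST 'https://username:password@username.cloudant.com/yourdatabase/_bulk_docs' \
     -H 'Content-Type: application/json' \
     -d '{"docs": [{"_id": "some_document_id", "_rev": "1-6a466d5dfda05e613ba97bd737829d67", "_deleted": true}]}'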
Deleted documents will remain in your database until you run the purge command.
If you want to do this within the dashboard, select the database you want, then select the settings cog by the DB name. You'll see a Delete option.
