Azure Logic App delete Row is not working - sql-server

I have a table that has a composite primary key.
CONSTRAINT [PK_FileContainerFiles] PRIMARY KEY CLUSTERED
(
[FileId] ASC,
[ContainerId] ASC
)
I am trying to delete the row using the Logic App SQL connector. It works when the primary key has a single column.
How do I supply two identifiers in the 'Row id' field of the Logic App? When I tried something like the below, I got an error. Is this a Microsoft Logic App issue? Any ideas? Please help.

Yes, it is possible. The SQL connector (which, by the way, is the same connector used in Flow as well as Logic Apps and PowerApps) treats primary keys just like SQL does. That is, you simply use each key value in sequence, separated by a comma, to construct the "full" key.
My example, using a composite key:
@{join(createArray(items('For_each')?['BUKRS'],items('For_each')?['LIFNR']),',')}
TL;DR: key values separated by a comma.
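Applied to the table in the question, that should mean joining FileId and ContainerId in the order the key columns are declared. A sketch, with hypothetical key values:
1017,2
or, adapting the expression above to the question's columns:
@{join(createArray(items('For_each')?['FileId'],items('For_each')?['ContainerId']),',')}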

The Row id is the unique identifier of the row you wish to delete.
So if you would like to delete a row based on those two input parameters, you would first need a way to look up the Row id (unique identifier) of the row(s) you'd like to delete, and then execute Delete row for each of the returned rows.
Another way would be to use a stored procedure to handle the deletion of the rows.
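A minimal sketch of such a procedure, assuming the FileContainerFiles table from the question (the INT parameter types and the procedure name are assumptions):
CREATE PROCEDURE dbo.DeleteFileContainerFile
    @FileId INT,
    @ContainerId INT
AS
BEGIN
    SET NOCOUNT ON;
    -- delete the single row identified by the composite key
    DELETE FROM dbo.FileContainerFiles
    WHERE FileId = @FileId
      AND ContainerId = @ContainerId;
END
The Logic App can then call this through the connector's "Execute stored procedure" action instead of "Delete row".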
For Reference:
https://learn.microsoft.com/en-us/connectors/sql/

Another working solution is to use the "Execute Query" action instead and do a DELETE with all the conditions you need (see the sketch below).
Sorry for answering such an old post, but I had the same issue and found this, so I think other people may find it useful too.
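A sketch of such a query, using the key columns from the question (the literal values are placeholders for values supplied from dynamic content):
-- deletes exactly one row, since both key columns are constrained
DELETE FROM dbo.FileContainerFiles
WHERE FileId = 1017
  AND ContainerId = 2;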

Related

Adding a primary key for Entity Framework to an existing column in a View based on a table where every column is Allow Null

There are a lot of questions asking this but I can't seem to find one with the specific solution.
I have a 3rd party database where every field is set to allow null. One of the columns ("Code") is a unique string ID and it is distinct.
Using Entity Framework, I'd like to add this table, telling EF to treat the column "Code" as a primary key.
I've created a view but I am not sure where to go from here.
I've seen some solutions that involve adding an extra row number to use as the primary key but I would prefer to use "Code" if possible.
Any ideas?
After some playing around I found a read-only solution.
In the view I modify the column to be:
SELECT ISNULL(Code, -1) AS Code
Specifying ISNULL allows EF to infer a primary key. It is not ideal, as I would like the view to be writable as well; you get the message:
Error 6002: The table/view 'KittyCat.dbo.View_GetCatDetails' does not have a primary key defined. The key has been inferred and the definition was created as a read-only table/view.
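For context, a minimal sketch of such a view (the view name is taken from the error message above; the source table and the extra column are placeholders):
CREATE VIEW dbo.View_GetCatDetails
AS
SELECT
    ISNULL(Code, -1) AS Code, -- ISNULL makes the column non-nullable, so EF infers it as the key
    SomeOtherColumn
FROM dbo.SomeThirdPartyTable;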

Using 2 same values in 1 row -> SQL 2008 table

I've searched for this a lot and I cannot find a solution, since I'm a beginner in SQL.
I used to edit game databases. Now I need to create a new table with one column called "CodeName128", and it should contain the same value many times.
When I insert something like
CODE_NAME1
CODE_NAME1
it tells me "No rows were updated", which means the table already has this code.
How can I get around this and enable duplicates in the table?
You must have a primary key or unique key defined on that column, which is not allowing you to enter duplicate values. The keys were presumably defined for a reason, so removing them is not advisable; still, if you think duplicate values are required for that column, you will have to alter the table structure and remove those constraints from the column.
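A sketch of what that could look like (the table and constraint names are hypothetical; look up the real constraint first):
-- list the constraints on the table to find the one enforcing uniqueness
EXEC sp_helpconstraint 'dbo.YourTable';
-- then drop it by name
ALTER TABLE dbo.YourTable DROP CONSTRAINT UQ_YourTable_CodeName128;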

cx_OracleTools CopyData.py - using without a PK constraint?

I'm attempting to use cx_OracleTools' CopyData.py script to copy data between two tables in separate Oracle schemas/instances:
http://cx-oracletools.sourceforge.net/cx_OracleTools.html
When I run it against my tables, I get the error:
No primary or unique constraint found on table.
I don't know much about Oracle, to be honest, but from what I can tell the tables don't seem to have any PK constraint or anything like that defined.
The merits of this aside, I think it's simply been set up that way for expediency, and it's unlikely to change anytime soon.
Is there any way to get copyData.py to run in this scenario without a PK constraint?
Cheers,
Victor
The issue is that CopyData checks to see if the row exists in the destination table, and it can't do that without a unique key.
If it is acceptable to insert all rows and not update changed ones, use the --no-check-exists option. According to the code this will bypass the primary key check.
Otherwise, use the --key-columns=COLS option to manually specify the columns to be used as the unique key. This will also bypass the primary key check.

SQL Server String column as a unique key

I am using a SQL Server table to keep information related to a URL. So there is a column "URL", and its type is VARCHAR. In the application we have to use this URL as a unique key to query for information (we are using something like SELECT * FROM Table WHERE URL = 'www.google.com/ig').
Is there any disadvantages or known drawbacks in using a URL as a unique key?
Usually it is a better idea to have a numeric value rather than a string as a table key. See a discussion about this subject here: Database Primary Key C# mapping - String or int.
As for using a URL, it should not pose you any problem provided that you have some basic rules to avoid inserting the same (equivalent) URL twice. That is, the database will interpret "www.google.com" and "http://www.google.com" as different strings, so you should have a rule like "URLs will never have the protocol identifier" or "URLs will never end with a slash", or whatever makes sense for your design.
As the others have said - I would definitely not use a long string like a URL as the primary/clustering key on a SQL Server table - but of course, you should feel free to put a unique constraint on that column to make sure you don't get any duplicates!
You can either do a UNIQUE CONSTRAINT or a UNIQUE INDEX - the end result is pretty much the same (the unique constraint will also be using an index behind the scenes). The plus side for a UNIQUE INDEX is that you can reference it as a foreign key in a separate table, so I almost always use that approach:
CREATE UNIQUE NONCLUSTERED INDEX UIX_YourTable_URL ON dbo.YourTable(urlField)
If you should ever try to insert a value that's already in the table, that insert statement will be rejected with a SQL error and nothing bad can happen.
I would still create a clustered key on the table, e.g. an auto number, and then create a unique index on the URL column (see the sketch below).
However, I can't see why a URL would not be unique, and all should work as is.
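A sketch of that layout (the table and column names are hypothetical):
CREATE TABLE dbo.Urls (
    UrlId INT IDENTITY(1,1) NOT NULL,
    Url VARCHAR(400) NOT NULL,
    CONSTRAINT PK_Urls PRIMARY KEY CLUSTERED (UrlId) -- narrow, ever-increasing clustered key
);
-- uniqueness is still enforced on the URL itself
CREATE UNIQUE NONCLUSTERED INDEX UIX_Urls_Url ON dbo.Urls(Url);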
You may benefit from a numeric primary key if paging is needed, but you can still add a numeric index later, so there's no obstacle to making the URL the PK.

Preventing Duplicate Inserts Into SQL With PHP

I'm going to be running thousands of queries into SQL and I need to prevent duplication of the field 'domain'. I've never had to do this before and any help would be appreciated.
You probably want to create a UNIQUE constraint on the field "Domain" - this constraint will raise an error if you create two rows that have the same domain in the database. For an explanation, see this tutorial at W3Schools:
http://www.w3schools.com/sql/sql_unique.asp
If this doesn't solve your problem, please clarify which database you have chosen to use (MySQL?).
NOTE: This constraint is completely separate from your choice of PHP as a programming language; it is a SQL database definition thing. A huge advantage of expressing this constraint in SQL is that you can trust the database to preserve the constraint even when people import/export data from the database, your application is buggy, or another application shares the database.
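For example, a sketch of the constraint (the table and column names are assumptions):
ALTER TABLE mydata ADD CONSTRAINT uq_mydata_domain UNIQUE (domain);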
If this is an absolute database integrity requirement (It's not likely to change, nor does existing data have this problem), I would enforce it at the database with a unique constraint.
As far as detecting it before or after the attempt in order to notify the user, there are a number of techniques which could be used.
Where is the data coming from? Is this something you only want to run once, a couple of times, or often? If the domain value already exists, do you just want to skip the insert, or do something else (i.e. increment a counter)?
Depending on your answers, there are many possible solutions:
- Pre-sort your data, eliminate duplicates, then insert (assumes relatively static data and an empty table to begin with).
- Use an associative array in PHP as a local domain-value cache (if the table already contains data, start by reading the existing content; not thread-safe, but works if it only runs once at a time).
- Make domain a UNIQUE column and write wrapper code to handle the resulting errors.
- Make domain a UNIQUE or PRIMARY KEY column and use an ON DUPLICATE KEY clause:
INSERT INTO mydata ( domain, count ) VALUES
( 'firstdomain', 1 ),
( 'seconddomain', 1 ),
( 'thirddomain', 1 )
ON DUPLICATE KEY
UPDATE count = count+1
- Insert all data into the table, then remove duplicates afterwards.
Note that batching inserts (i.e. using multiple value clauses per statement) can be significantly faster.
I'm not really sure I understood your question, but perhaps you are looking for SQL's UNIQUE constraint. If the query tries to insert a pre-existing value into a field, you (PHP) will be notified about this constraint breach.
There are a bunch of ways to approach this. You could set a unique constraint (like a primary key) on that column; this will cause the insert to fail if that domain has already been inserted. You could also insert all of the duplicate domains and just delete them later on, which works well if not that many of the domains are duplicated. There are a few questions posted already on finding duplicate rows.
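For reference, the usual duplicate-finding query looks something like this (the table and column names reuse the hypothetical examples above):
SELECT domain, COUNT(*) AS cnt
FROM mydata
GROUP BY domain
HAVING COUNT(*) > 1;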
This can be done with SQL rather than with PHP.
I am assuming that you are using MySQL, but the same principles will work with different databases.
Make the domain column the primary key (makes sense, as it has to be unique).
Rather than a plain INSERT, use MySQL's REPLACE (or the INSERT ... ON DUPLICATE KEY UPDATE form shown above); note that a plain UPDATE would never create a new row. If the primary key you are trying to put into the table already exists, the statement updates (or replaces) the existing tuple rather than creating a new one.
So you will overwrite existing data if it is different, and if it is identical the row is simply left as it was.
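A minimal sketch of that approach (MySQL syntax, reusing the hypothetical mydata table from the earlier example):
ALTER TABLE mydata ADD PRIMARY KEY (domain);
-- REPLACE deletes any existing row with the same key, then inserts the new one
REPLACE INTO mydata (domain, count) VALUES ('firstdomain', 1);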
