database changed outside of DataSet: duplicate key exception

I'm using a strongly typed DataSet, and I fill all tables of the DataSet at the beginning of the program. One table can be changed by multiple users. Sometimes this table is changed outside of the DataSet, so a user who does not have the newest version of the table gets an exception when trying to insert a row that already exists.
My procedure: when a user wants to insert a new row, I check the DataSet for the existence of the row and only insert it if it is not already there. But in the meantime (between filling and checking) another user can insert the same row into the database, so I get a SqlException: violation of PRIMARY KEY constraint (error 2627).
How can I avoid this?
This is how I check whether the row is already in the database:
customersTableAdapter custAdapter = new customersTableAdapter();
AzureDataSet.customersRow custRow = azureDataSet.customers.FindBycustID(hash);
if (custRow == null)
{
    try
    {
        custRow = azureDataSet.customers.NewcustomersRow();
        custRow.custID = hash;
        azureDataSet.customers.AddcustomersRow(custRow);
        retryPolicy.ExecuteAction(() =>
        {
            custAdapter.Update(azureDataSet.customers);
        });
    }
    catch (SqlException ex)
    {
        if (ex.Number != 2627) // 2627 = violation of PRIMARY KEY constraint
            throw;
        // another user inserted the same key first; the row already exists
    }
}
return custRow.custID;

The problem is that TableAdapters do not provide any data synchronisation.
I wrote a stored procedure that inserts the data and checks inside the stored procedure whether the value is already there.
I also caught the duplicate key exception, which might still occur because of the delay between the check and the insert operation.
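For illustration, here is a minimal client-side sketch of that approach; the stored procedure name InsertCustomerIfMissing and the open connection are hypothetical placeholders, not the poster's actual code:

using System.Data;
using System.Data.SqlClient;

// Sketch only: the procedure (assumed, not shown) checks for the key and
// inserts in one statement, e.g. INSERT ... WHERE NOT EXISTS. The client
// still handles error 2627 for the rare race between check and insert.
static void InsertCustomerIfMissing(SqlConnection connection, string hash)
{
    try
    {
        using (var cmd = new SqlCommand("dbo.InsertCustomerIfMissing", connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@custID", hash);
            cmd.ExecuteNonQuery();
        }
    }
    catch (SqlException ex)
    {
        if (ex.Number != 2627) // 2627 = violation of PRIMARY KEY constraint
            throw;
        // another session inserted the same key first; treat as already present
    }
}

Keeping the check and the insert in a single statement inside the procedure narrows the race window; the catch block covers whatever window remains.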

Related

Entity Framework Insert Into Table With AFTER INSERT Trigger

I am working on a Web API with Entity Framework 6 that does a "bulk" insert of under 500 records at any given time into a Microsoft SQL Server table. The DbContext.SaveChanges() method will insert all the records into a table in a couple of seconds, so I have no issues with that. However, when the method is called to insert the same number of records into the same table with a semi-extensive trigger attached to it, the process can take many minutes. The trigger makes some calls to table joins, inserts into other tables, and then deletes the newly inserted record.
I do not have much control over the table or the trigger, so I am looking for suggestions on how to improve performance. I suggested moving the trigger logic into a stored procedure and having the trigger call that stored procedure, but I am uncertain whether that will achieve any gains.
EDIT: Since my question was kind of generic, I will post some of my code in case it helps. The SQL is not mine, so I will see what I can actually post.
Here is the part of my Web API method that does the call to SaveChanges():
string[] stringArray = results[0].Split(new[] { "\r\n", "\r", "\n" }, StringSplitOptions.None);
var profileObjs = db.Set<T_ProfileStaging>();
foreach (var s in stringArray)
{
    string[] columns = s.Split(new[] { ",", "\t" }, StringSplitOptions.None);
    if (columns.Length == 9) // nine fields are consumed below (indices 0 through 8)
    {
        T_ProfileStaging profileObj = new T_ProfileStaging();
        profileObj.CompanyCode = columns[0];
        profileObj.SubmittedBy = columns[1];
        profileObj.VersionName = columns[2];
        profileObj.DMName = columns[3];
        profileObj.Zone = columns[4];
        profileObj.DMCode = columns[5];
        profileObj.ProfileName = columns[6];
        profileObj.Advertiser = columns[7];
        profileObj.OriginalInsertDate = columns[8];
        profileObjs.Add(profileObj);
    }
}
try
{
    db.SaveChanges();
    return Ok();
}
catch (Exception)
{
    return Content(HttpStatusCode.BadRequest, "SQL Server Insert Exception");
}
When you load with SaveChanges(), EF sends each row in a separate INSERT statement, so the trigger runs once per row.
To work around this you need to either:
use a bulk load API from the client (instead of EF's SaveChanges()): SqlBulkCopy directly, or one of the many EF extensions that wrap it (a sketch follows below),
or
configure EF to insert into a different staging table and then INSERT ... SELECT into the target table.
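A minimal sketch of the SqlBulkCopy route, assuming an open SqlConnection; the staging-table name, helper class, and column list are taken from the question's parsing code and are assumptions, not a confirmed schema:

using System;
using System.Data;
using System.Data.SqlClient;

static class ProfileBulkLoader
{
    // Sketch only: loads all parsed lines in one bulk operation instead of
    // one INSERT per row, so a trigger no longer runs once per record.
    public static void BulkLoad(SqlConnection connection, string[] stringArray)
    {
        string[] names =
        {
            "CompanyCode", "SubmittedBy", "VersionName", "DMName", "Zone",
            "DMCode", "ProfileName", "Advertiser", "OriginalInsertDate"
        };

        var table = new DataTable();
        foreach (var name in names)
            table.Columns.Add(name, typeof(string));

        foreach (var line in stringArray)
        {
            string[] columns = line.Split(new[] { ",", "\t" }, StringSplitOptions.None);
            if (columns.Length == names.Length)
                table.Rows.Add(columns); // one DataRow per parsed line
        }

        // Note: SqlBulkCopy does not fire triggers by default; pass
        // SqlBulkCopyOptions.FireTriggers if the trigger must still run
        // (it then fires once per batch rather than once per row).
        using (var bulkCopy = new SqlBulkCopy(connection))
        {
            bulkCopy.DestinationTableName = "dbo.T_ProfileStaging";
            bulkCopy.WriteToServer(table); // single bulk insert
        }
    }
}

For the second option, the staging table can afterwards be flushed into the target table with a single INSERT ... SELECT statement, so a statement-level trigger fires once for the whole batch.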

Missing Insert Record on the table with Trigger (Oracle DB)

The scenario is the following -
OrderTable with Columns "OrderId" and "OrderType"
OrderRelationTable with Columns "OrderId" and "CustId"
OrderProcessTable with Columns "OrderId", "OrderType", "CustId", and "ProcessFlag"
The flow goes like this-
Application1 creates the record in OrderTable -> then passes the record to Application2 via the MQ protocol; Application2 in this case inserts/creates the passed record in OrderRelationTable -> then a trigger is fired in the Oracle DB to create the record in OrderProcessTable.
Problem
Sometimes the record is not inserted into the third table, OrderProcessTable. I am not sure whether it is caused by timing or by something that is not correct in the trigger.
Application1 Code
boolean updated = false;
/** JDBC prepared statement: insert into OrderTable **/
int rowCount = ps.executeUpdate();
if (rowCount > 0) {
    updated = true;
}
log.log("updated flag:" + updated);
/** I am able to see in the log that the flag is true, and the record was inserted into OrderTable **/
Application2 Code
This doesn't really matter much; assume it is some Java JDBC code that inserts into OrderRelationTable successfully.
The Trigger
Assuming the syntax is correct.
CREATE OR REPLACE TRIGGER INSERTINTOOrderProcessTable
AFTER INSERT ON OrderRelationTable
FOR EACH ROW
DECLARE
  v_order_type OrderTable.OrderType%TYPE := NULL;
BEGIN
  SELECT OrderType INTO v_order_type FROM OrderTable
   WHERE OrderId = :new.OrderId
     AND OrderType IS NOT NULL
     AND rownum = 1;
  IF v_order_type IS NOT NULL THEN
    INSERT INTO OrderProcessTable VALUES (:new.OrderId, v_order_type, :new.CustId, 'N');
  END IF;
END;
Questions -
After the Application1 code is executed, is it guaranteed that the DB will have the OrderTable record available for a SELECT statement? (Assume that the updated flag is true.)
Is there a timing issue between the app code and the trigger, for example when the trigger runs the SELECT statement against OrderTable? (Of course the order id matches in OrderRelationTable and OrderTable.)
Basically, my problem right now is that sometimes (rarely) the record is not inserted into OrderProcessTable via the trigger even though it should be (OrderType is not null). Any idea?
There's no timing issue, as far as I can tell.
As for the trigger code: what is the purpose of the AND rownum = 1 condition? I'm not saying that it is wrong, I'm just asking. Do you expect several rows to be returned by that query? If so, is that a legal situation? Wouldn't you rather handle it with a TOO_MANY_ROWS exception handler (i.e. instead of using the ROWNUM condition)?
What happens if the SELECT returns nothing? It then raises the NO_DATA_FOUND exception, the trigger fails, and it certainly doesn't insert anything. Is that error propagated so that someone (a human being) or something (an error-logging procedure) sees / catches it, so that you'd know that something went wrong?
And, of course, there is the case where V_ORDER_TYPE remains NULL, so the INSERT is never executed (as P. Salmon already suggested).

Avoid Adding Duplicate Records

I'm trying to write an IF statement that gives an error message if the user tries to add an existing ID number. When I enter an existing ID, I get the error. Up to here it's OK, but when I then type another ID and fill the fields (name, address, etc.), the record doesn't go to the database.
METHOD add_employee.
DATA: IT_EMP TYPE TABLE OF ZEMPLOYEE_20.
DATA:WA_EMP TYPE ZEMPLOYEE_20.
Data: l_count type i value '2'.
SELECT * FROM ZEMPLOYEE_20 INTO TABLE IT_EMP.
LOOP AT IT_EMP INTO WA_EMP.
IF wa_emp-EMPLOYEE_ID eq pa_id.
l_count = l_count * '0'.
else.
l_count = l_count * '1'.
endif.
endloop.
If l_count eq '2'.
WA_EMP-EMPLOYEE_ID = C_ID.
WA_EMP-EMPLOYEE_NAME = C_NAME.
WA_EMP-EMPLOYEE_ADDRESS = C_ADD.
WA_EMP-EMPLOYEE_SALARY = C_SAL.
WA_EMP-EMPLOYEE_TYPE = C_TYPE.
APPEND wa_emp TO it_emp.
INSERT ZEMPLOYEE_20 FROM TABLE it_emp.
CALL FUNCTION 'POPUP_TO_DISPLAY_TEXT'
EXPORTING
TITEL = 'INFO'
TEXTLINE1 = 'Record Added Successfully.'.
elseif l_count eq '0'.
CALL FUNCTION 'POPUP_TO_DISPLAY_TEXT'
EXPORTING
TITEL = 'INFO'
TEXTLINE1 = 'Selected ID already in database.Please type another ID no.'.
ENDIF.
ENDMETHOD.
I'm not sure I'm getting your explanation. Why are you trying to re-insert all the existing entries back into the table? You're just trying to insert C_ID etc if it doesn't exist yet? Why do you need all the existing entries for that?
If so, throw out that select and the loop completely; you don't need them. You have a few options...
Just read the table with your single entry
SELECT SINGLE * FROM ztable INTO wa WHERE id = c_id. "add the other key fields to the WHERE clause as needed
IF SY-SUBRC = 0.
"this entry exists. popup!
ENDIF.
Use a modify statement
This will overwrite duplicate entries with new data (so non-key fields may change this way); it won't fail, and there is no need for a popup.
MODIFY ztable FROM wa.
Catch the SQL exceptions instead of making it dump
If the update fails because of an exception, you can always catch it and deal with exceptional situations.
TRY .
INSERT ztable FROM wa.
CATCH sapsql_array_insert_duprec.
"do your popup, the update failed because of duplicate records
ENDTRY.
I think there's a bug when appending to the internal table IT_EMP and inserting into the ZEMPLOYEE_20 table.
Suppose you append the first time and then you insert. When you append a second time, you will have 2 records in IT_EMP that are going to be inserted into ZEMPLOYEE_20, because you never refresh or clear the internal table, and there you will get a runtime error.
According to the SAP documentation on 'Inserting Lines into Tables':
Inserting Several Lines
To insert several lines into a database table, use the following:
INSERT <dbtab> FROM TABLE <itab> [ACCEPTING DUPLICATE KEYS]. This writes all lines of the internal table <itab> to the database table <dbtab> in one single operation. The same rules apply to the line type of <itab> as to the work area <wa> described above. If the system is able to insert all of the lines from the internal table, SY-SUBRC is set to 0. If one or more lines cannot be inserted because the database already contains a line with the same primary key, a runtime error occurs.
Maybe the right direction here is to insert the work area directly, but before that you must check whether the record already exists, using the primary key.
Check the SAP documentation linked above on this issue.
On the other hand, once l_count is zero because of l_count = l_count * '0'., that value will never change to any other number, so you will never append or insert again.
Why are you retrieving all entries from zemployee_20?
You can directly check whether the ID already exists by using SELECT SINGLE. If it exists, show the message; if not, add the record.
It is recommended to retrieve only the field that is needed, not the entire row with the asterisk *.
SELECT SINGLE employee_id FROM zemployee_20 INTO v_id WHERE employee_id = pa_id. "or a field in a structure
if sy-subrc = 0. "exists
  "show message
else. "ID does not exist yet
  "populate structure and then add the record to the Z table
endif.
Furthermore, l_count is not only unnecessary but also badly implemented.
You can use the INSERT statement directly; if sy-subrc is non-zero, raise an error message.
WA_EMP-EMPLOYEE_ID = C_ID.
WA_EMP-EMPLOYEE_NAME = C_NAME.
WA_EMP-EMPLOYEE_ADDRESS = C_ADD.
WA_EMP-EMPLOYEE_SALARY = C_SAL.
WA_EMP-EMPLOYEE_TYPE = C_TYPE.
INSERT ZEMPLOYEE_20 FROM WA_EMP.
IF sy-subrc <> 0.
  "raise the exception / show an error message
ENDIF.

Correct method of deleting over 2100 rows (by ID) with Dapper

I am trying to use Dapper to support the data access for my server app.
My server app has another application that drops records into my database at a rate of 400 per minute.
My app pulls them out in batches, processes them, and then deletes them from the database.
Since data continues to flow into the database while I am processing, I don't have a good way to say delete from myTable where allProcessed = true.
However, I do know the PK values of the rows to delete, so I want to do a delete from myTable where Id in @listToDelete.
The problem is that if my server goes down for even 6 minutes, then I have over 2100 rows to delete.
Since Dapper takes my @listToDelete and turns each element into a parameter, my call to delete fails, because SQL Server allows at most 2100 parameters per statement. (This causes my data purging to get even further behind.)
What is the best way to deal with this in Dapper?
NOTES:
I have looked at table-valued parameters, but from what I can see they are not very performant. This piece of my architecture is the bottleneck of my system and I need it to be very, very fast.
One option is to create a temp table on the server and then use the bulk load facility to upload all the IDs into that table at once. Then use a join, EXISTS or IN clause to delete only the records that you uploaded into your temp table.
Bulk loads are a well-optimized path in SQL Server and it should be very fast.
For example:
Execute the statement CREATE TABLE #RowsToDelete(ID INT PRIMARY KEY)
Use a bulk load to insert keys into #RowsToDelete
Execute DELETE FROM myTable where Id IN (SELECT ID FROM #RowsToDelete)
Execute DROP TABLE #RowsToDelete (the table will also be dropped automatically when you close the session)
(Assuming Dapper) code example:
conn.Open();
var columnName = "ID";
conn.Execute(string.Format("CREATE TABLE #{0}s({0} INT PRIMARY KEY)", columnName));
using (var bulkCopy = new SqlBulkCopy(conn))
{
    bulkCopy.BatchSize = ids.Count;
    bulkCopy.DestinationTableName = string.Format("#{0}s", columnName);
    var table = new DataTable();
    table.Columns.Add(columnName, typeof(int));
    bulkCopy.ColumnMappings.Add(columnName, columnName);
    foreach (var id in ids)
    {
        table.Rows.Add(id);
    }
    bulkCopy.WriteToServer(table);
}
// or do other things with your table instead of deleting here
conn.Execute(string.Format(@"DELETE FROM myTable WHERE Id IN
    (SELECT {0} FROM #{0}s)", columnName));
conn.Execute(string.Format("DROP TABLE #{0}s", columnName));
To get this code working, I went over to the dark side.
Since Dapper turns my list into parameters, and SQL Server can't handle a large number of parameters (I have never needed even double digits before), I had to go with dynamic SQL.
So here was my solution:
string listOfIdsJoined = "("+String.Join(",", listOfIds.ToArray())+")";
connection.Execute("delete from myTable where Id in " + listOfIdsJoined);
Before everyone grabs their torches and pitchforks, let me explain.
This code runs on a server whose only input is a data feed from a Mainframe system.
The list I am dynamically creating is a list of longs/bigints.
The longs/bigints are from an Identity column.
I know constructing dynamic SQL is bad juju, but in this case, I just can't see how it leads to a security risk.
Dapper accepts a list of objects whose property names match the query's parameters, so in the above case a list of objects with an Id property will work:
connection.Execute("delete from myTable where Id in (@Id)", listOfIds.AsEnumerable().Select(i => new { Id = i }).ToList());
This will work, though note that Dapper executes the statement once per element of the list.

How do I query SQLite Database in Android?

I successfully created the database and inserted a row; however, I cannot query it for some reason. My Droid crashes every time.
// Create a new row of values to insert.
ContentValues newValues = new ContentValues();
// Assign a value for each column.
newValues.put("value", "kunjan");
// Insert the row into your table.
myDatabase.insert(DATABASE_TABLE, null, newValues);

String[] result_columns = new String[] { "value" };
// I GET AN EXCEPTION HERE
Cursor allRows = myDatabase.query(true, DATABASE_TABLE, result_columns, null, null,
        null, null, null, null);
if (allRows.moveToFirst()) {
    String value = allRows.getString(0);
    TextView foo = (TextView) findViewById(R.id.TextView01);
    foo.setText(value);
}
allRows.close();
myDatabase.close();
I get this exception
no such column: value: , while compiling: SELECT DISTINCT value FROM mainTable
I think you are creating the table with only one column (value), and you are trying to read column 1 instead of column 0 from the cursor. If that still doesn't help, please add a try-catch block and capture the logs:
String value = allRows.getString(0);
Some of the problems with the original post:
Creating a table that had already been created. I added "if not exists" to the "create table" instruction.
Not handling exceptions properly. I got a lot of exceptions when creating the table, querying it, etc. I surrounded all these operations with try-catch.
Using hard-coded values for column names. This made it harder to verify that I was querying the column that I had created.
