SqlDataAdapter UpdateBatchSize row errors

The DataAdapter has a ContinueUpdateOnError property which you can set to 'True' so that DataAdapter.Update keeps processing even when an error is encountered. This is ideal, because I can catch the error for each row of data that fails to insert. However, this only works properly if batch processing is turned OFF (DataAdapter.UpdateBatchSize = 1).
If DataAdapter.UpdateBatchSize is set to 0 or to another number that turns batch processing on, and an error occurs within one record of the batch, then the whole batch fails to update. Obviously, this is not what I want to happen.
Ideally, I'd like a mix of the two scenarios. I'd like to be able to use batch processing, thereby making fewer round-trips to the database when inserting the rows, but at the same time I'd like to be able to catch each individual row error as it occurs (which for some reason doesn't work when batching is turned on and ContinueUpdateOnError is set to 'True').
To me, it looks like it has to be one way or the other: either I insert each row individually, with a trip to the database for each insert and the ability to catch each row error, or I send batches to the server, and if a row fails in the batch, then the whole batch fails.
Any thoughts?

From MSDN's page "DataAdapter.ContinueUpdateOnError Property":
If ContinueUpdateOnError is set to true, no exception is thrown when an error occurs during the update of a row. The update of the row is skipped and the error information is placed in the RowError property of the row in error. The DataAdapter continues to update subsequent rows.
And from MSDN's page "Performing Batch Operations Using DataAdapters (ADO.NET)":
Batch execution has the same effect as the execution of each individual statement. Statements are executed in the order that the statements were added to the batch. Errors are handled the same way in batch mode as they are when batch mode is disabled.
So either you've done something wrong or Microsoft's documentation on this isn't reliable.
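If the documentation is accurate, a pattern along the lines of this sketch should surface each failing row through RowError even with batching enabled. The table, columns, and connection string here are assumptions, and the second row deliberately violates a hypothetical NOT NULL constraint just to produce a per-row error:

    // C# sketch: batched inserts with per-row error reporting via RowError.
    using System;
    using System.Data;
    using System.Data.SqlClient;

    class BatchInsertDemo
    {
        static void Main()
        {
            var table = new DataTable("Items");
            table.Columns.Add("Id", typeof(int));
            table.Columns.Add("Name", typeof(string));
            table.Rows.Add(1, "first");
            table.Rows.Add(2, null);      // assume Name is NOT NULL, so this row should fail
            table.Rows.Add(3, "third");

            using (var connection = new SqlConnection("Server=.;Database=Demo;Integrated Security=true"))
            using (var adapter = new SqlDataAdapter())
            {
                adapter.InsertCommand = new SqlCommand(
                    "INSERT INTO dbo.Items (Id, Name) VALUES (@Id, @Name)", connection);
                adapter.InsertCommand.Parameters.Add("@Id", SqlDbType.Int, 0, "Id");
                adapter.InsertCommand.Parameters.Add("@Name", SqlDbType.NVarChar, 50, "Name");
                adapter.InsertCommand.UpdatedRowSource = UpdateRowSource.None; // required when UpdateBatchSize != 1

                adapter.ContinueUpdateOnError = true;  // record errors per row instead of throwing
                adapter.UpdateBatchSize = 100;         // send up to 100 statements per round-trip

                adapter.Update(table);

                // Per the documentation, failed rows should be flagged here rather than
                // failing the whole batch.
                foreach (DataRow row in table.Rows)
                    if (row.HasErrors)
                        Console.WriteLine($"Row {row["Id"]}: {row.RowError}");
            }
        }
    }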

Related

Snowflake: Fail COPY INTO in case of error limit while loading?

Is it possible to set an error limit while loading data into a Snowflake table? I am using the COPY INTO option. I know there are options like RETURN_FAILED_ONLY and VALIDATION_MODE, but these do not support what I need: fail the COPY INTO if the error limit is reached, otherwise continue loading and ignore the failed records.
I believe what you are looking for is the ON_ERROR copy option with a value of SKIP_FILE_num or 'SKIP_FILE_num%'. This skips a file when a certain number or percentage of its records fail. When a file is skipped it is listed with a status of FAILED.
https://docs.snowflake.com/en/sql-reference/sql/copy-into-table.html
Snowflake does not currently have an equivalent that applies across all files of a load. Depending on how you are scripting/executing your COPY INTO commands, however, you could wrap the COPY INTO command in a transaction, check the output of the COPY INTO, determine whether it is inside or outside your threshold, and then either commit or roll back the transaction. This would accomplish what you are looking for, but it takes a bit of custom coding.
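A minimal sketch of that transaction-wrapping idea from .NET client code, assuming the caller supplies an open ADO.NET connection to Snowflake (for example from the Snowflake .NET connector); the table, stage, and threshold are assumptions, and the errors_seen column comes from the COPY INTO output described in the documentation linked above:

    // C# sketch: run COPY INTO inside a transaction and commit only if the
    // total number of rejected records stays under a caller-supplied threshold.
    using System;
    using System.Data.Common;

    static class CopyWithErrorThreshold
    {
        public static void Run(DbConnection connection, long maxTotalErrors)
        {
            // 'connection' is assumed to be an open connection to Snowflake.
            using (var transaction = connection.BeginTransaction())
            using (var command = connection.CreateCommand())
            {
                command.Transaction = transaction;
                // ON_ERROR = 'CONTINUE' loads what it can and reports errors per file.
                command.CommandText = "COPY INTO my_table FROM @my_stage ON_ERROR = 'CONTINUE'";

                long totalErrors = 0;
                using (var reader = command.ExecuteReader())
                {
                    // COPY INTO returns one row per file, including an errors_seen column.
                    while (reader.Read())
                        totalErrors += Convert.ToInt64(reader["errors_seen"]);
                }

                if (totalErrors > maxTotalErrors)
                {
                    transaction.Rollback();   // over the threshold: undo the whole load
                    Console.WriteLine($"Rolled back: {totalErrors} errors exceeded the limit of {maxTotalErrors}.");
                }
                else
                {
                    transaction.Commit();     // within the threshold: keep what loaded
                }
            }
        }
    }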

SalesForce DML set-based operations and atomic transactions

I have just begun to read about Salesforce APEX and its DML. It seems you can do bulk updates by constructing a list, adding items to be updated to the list, and then issuing an update myList call.
Does such an invocation of update create an atomic transaction, so that if for any reason an update to one of the items in the list should fail, the entire update operation is rolled back? If not, is there a way to wrap an update in an atomic transaction?
Your whole context is an atomic transaction. By the time your Apex code runs, Salesforce has already started one, whether the entry point is a Visualforce button click, a trigger, or anything else. If you hit a validation error, a null pointer exception, an attempt to divide by zero, a thrown exception, etc., the whole thing will be rolled back.
update myList; works in "all or nothing" mode. If one of the records fails (a validation rule, a required field missing, etc.), you'll get an exception. You can wrap it in a try-catch block, but still: the whole list just failed to save.
If you need "save what you can" behavio(u)r, read up on the Database.update() version of this call. It takes an optional parameter that lets you do exactly that.
Last but not least, for complex scenarios (insert an account, insert contacts, one of the contacts fails, but you had that in a try-catch so the account saved OK... so what now, do you have to manually delete it? Weak...), you have the Database.setSavepoint() and Database.rollback() calls.
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_dml_database.htm
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_transaction_control.htm
https://salesforce.stackexchange.com/questions/9410/rolling-back-dml-operation-in-apex-method

SQL: Avoid batch failing when processing

I have a SQL Server job that picks up a maximum of 1000 items from a queue for processing, at an interval of 1 minute.
In the job I use MERGE INTO the target table and mark the status of those items as complete; the job then finishes and processes the next batch in the next interval.
All good so far, except that recently there was an incident where one of the items had a problem, and since we process the batch in a single SQL statement, the whole batch failed because of that one item.
No big deal, as we later identified the faulty item, patched it, and re-processed the whole failed batch.
What I am interested to know is: what are some of the things I can do to avoid failing the entire batch?
This time I know the reason for the faulty item, so I can add a check to flush it out before the single MERGE INTO statement, but that does not cover other, unknown errors.

SQL Server: Send email about updates during transaction

I have a job that runs clean-up queries stored in a table. A cursor rolls through the queries; if one fails, a try-catch gets the error message for the failing query, along with the table and database, and puts that information in an email with sp_send_dbmail.
I am wondering if it is possible to change the catch block so that, after a query runs, it looks for the transactions that were successful, then gets the updated rows (or maybe just the IDs of those rows) and puts those IDs in an email.
Or would it be easier to have the query look for the rows it is about to update while the job is running, and create an email after the updates happen?
It sounds like the OUTPUT clause might be useful here. You can find many examples of its usage by searching. In short, you add the OUTPUT clause to each "cleaning" statement and capture the information you need. Yes - each statement. I am doubtful about your goal - but that is a different issue.
And btw - the catch block runs when an error occurs. It does not make much sense to use the catch block to look for the effects of successful statements. In the catch block you know that the most recently executed statement in your try block failed. And of course, if every statement is successful the catch block never executes.
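As an illustration of the OUTPUT clause capturing the affected IDs, here is a sketch from .NET client code; the table, column, and cleaning statement are made up, and the same OUTPUT pattern (typically OUTPUT ... INTO a table variable) works inside the T-SQL job itself:

    // C# sketch: a cleaning UPDATE whose OUTPUT clause returns the key of every
    // row it touched, so the IDs can be collected for the notification email.
    using System;
    using System.Collections.Generic;
    using System.Data.SqlClient;

    static class CleanupWithOutput
    {
        public static List<int> RunCleanupStep(SqlConnection connection)
        {
            // 'connection' is assumed to be open.
            const string sql = @"
                UPDATE dbo.Orders
                SET    Status = 'Archived'
                OUTPUT inserted.OrderId          -- one row per record the statement touched
                WHERE  Status = 'Stale';";

            var affectedIds = new List<int>();
            using (var command = new SqlCommand(sql, connection))
            using (var reader = command.ExecuteReader())
            {
                while (reader.Read())
                    affectedIds.Add(reader.GetInt32(0));
            }
            return affectedIds;   // e.g. format these into the body passed to sp_send_dbmail
        }
    }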

Lost Update Anomaly in SQL Server Update Command

I am very much confused.
I have a transaction at the ReadCommitted isolation level. Among other things, I am updating a counter value in it, with something similar to the statement below:
Update tblCount set counter = counter + 1
My application is a desktop application, and this transaction occurs quite frequently and concurrently. We recently noticed that sometimes the counter value doesn't get updated, i.e. an increment is missed. We also insert one record on each counter update, so we are sure the records have been inserted, but somehow the counter fails to update. This happens about once in 2000 simultaneous transactions.
I seriously doubt it is a lost update anomaly I am facing, but if you look at the command above, it just updates the counter from its own value: if I have started a transaction and the transaction has reached this statement, it should have locked the row. This should not cause a lost update, yet it is happening somehow.
Could it be that this update command works in two parts? That is, first it reads the counter value (without taking an exclusive lock) and then writes the newly calculated value (at which point it does take an exclusive lock)?
Please help, I am really confused.
The update command does not work in two parts. It only works in one.
There's something else going on, and my first guess would be that your transaction is rolling back for another reason. Out of those 2,000 transactions, for example, one may be rolling back - especially if you're doing a ton of things concurrently - and it didn't succeed at all.
That update may not have been what caused the problem, either - you may have deadlocks involved due to other transactions, and they may be failing before the update command (or during the update command).
I'd zoom out and ask questions about the transaction's error handling. Are you doing everything in try/catch blocks? Are you capturing error levels when transactions fail? If not, you'll need to capture a trace with Profiler to find out what's going on.
Are you sure that the SQL is always succeeding? What I mean is, could it be something like an occasional lock time-out? Are you handling SQL exceptions in your .Net code in a way that will make you aware of them (e.g. a pop-up message or a log entry)?
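A minimal sketch of the kind of client-side error handling both answers are getting at; the connection string, the accompanying insert, and the logging are assumptions, the point being that a deadlock or lock time-out surfaces somewhere visible instead of being silently swallowed:

    // C# sketch: run the counter update in an explicit transaction and make any
    // failure (deadlock victim, lock time-out, rollback) visible.
    using System;
    using System.Data.SqlClient;

    static class CounterUpdate
    {
        public static void IncrementCounter(string connectionString)
        {
            using (var connection = new SqlConnection(connectionString))
            {
                connection.Open();
                using (var transaction = connection.BeginTransaction())
                {
                    try
                    {
                        using (var command = new SqlCommand(
                            "UPDATE tblCount SET counter = counter + 1", connection, transaction))
                        {
                            command.ExecuteNonQuery();
                        }
                        // ... the record inserted alongside each counter update would go here ...
                        transaction.Commit();
                    }
                    catch (SqlException ex)
                    {
                        transaction.Rollback();
                        // Log or display the failure rather than swallowing it.
                        // For example, error number 1205 means this session was a deadlock victim.
                        Console.Error.WriteLine($"Counter update failed ({ex.Number}): {ex.Message}");
                        throw;
                    }
                }
            }
        }
    }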
