Salesforce DML set-based operations and atomic transactions

I have just begun to read about Salesforce Apex and its DML. It seems you can do bulk updates by constructing a list, adding the items to be updated to the list, and then issuing an update myList call.
Does such an invocation of update create an atomic transaction, so that if for any reason an update to one of the items in the list should fail, the entire update operation is rolled back? If not, is there a way to wrap an update in an atomic transaction?

Your whole context is an atomic transaction. By the time your Apex code runs, Salesforce has already started one, whether the entry point is a Visualforce button click, a trigger or anything else. If you hit a validation error, a null pointer exception, a divide by zero, an uncaught exception, etc., the whole thing is rolled back.
update myList; works in "all or nothing" mode. If one of the records fails (validation rule, required field missing, etc.), you'll get an exception. You can wrap the statement in a try-catch block, but the whole list still fails to save.
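For example (the object and field used here are just illustrative, not from the question):

List<Account> myList = [SELECT Id, Description FROM Account LIMIT 200];
for (Account a : myList) {
    a.Description = 'touched by batch';
}
try {
    update myList;    // all or nothing: one bad record fails the whole list
} catch (DmlException e) {
    // none of the records were saved
    System.debug('Update failed: ' + e.getDmlMessage(0));
}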
If you need "save what you can" behavio(u)r - read up about Database.update() version of this call. It takes optional parameter that lets you do exactly that.
Last but not least, if you're inserting complex scenarios (insert an account, insert contacts, one of the contacts fails but you had that in a try-catch so the account saved OK; what now, do you have to delete it manually? Weak...), you have the Database.setSavepoint() and Database.rollback() calls.
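A sketch of that pattern (acct and kids are just placeholder variables):

Savepoint sp = Database.setSavepoint();
try {
    insert acct;                      // parent account
    for (Contact c : kids) {
        c.AccountId = acct.Id;
    }
    insert kids;                      // children
} catch (DmlException e) {
    Database.rollback(sp);            // undoes the account insert as well
}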
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_dml_database.htm
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_transaction_control.htm
https://salesforce.stackexchange.com/questions/9410/rolling-back-dml-operation-in-apex-method

Related

Salesforce : How to prevent 'upsert' (executed from an apex Job) of a record, if the record has a checkbox field which is marked false?

Salesforce question:
We have to update accounts from a schedulable job. Inserting new records is no problem, but existing records should only be updated if a particular checkbox field on them is set to true (otherwise they should not be updated). Also, note that the Apex code runs in system context.
I am looking for a way which DOES NOT involve pulling the record in code by querying on the Id and then checking that field value before upserting.
Thank you for helping.
Code
List<Account> accountList = new List<Account>(accountsToUpdate);
upsert accountList MY_COMPOSITE_KEY__c;
Make a validation rule that simply has Your_Checkbox__c as its error condition. Or make a before insert, before update trigger on Account (if you don't have one already) that inspects all records in Trigger.new and calls addError() on them. The validation rule is slightly preferred because it's just config, no code.
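If you go the trigger route, a rough sketch could look like this (using the made-up Your_Checkbox__c field from above; flip the condition if your rule works the other way around):

trigger BlockUncheckedAccountUpdates on Account (before update) {
    // before update only, so brand new inserts are unaffected
    for (Account a : Trigger.new) {
        if (a.Your_Checkbox__c != true) {
            a.addError('This account cannot be updated while the checkbox is unchecked.');
        }
    }
}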
The problem with either is that it will cause your whole transaction to die. If your batch updates 200 accounts and one of them trips this check, that one failure will block updating them all. This is done to ensure the system's state stays consistent (read up about "atomic operations" or "ACID"); you wouldn't want data that's halfway updated...
So you'll probably have to mitigate that by calling the Database.upsert(records, externalIdField, false) overload, so it saves what it can and doesn't throw exceptions; a sketch follows.
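Something along these lines (reusing the external ID field from the question's snippet; adjust to taste):

// false = partial success allowed, failures come back as results instead of exceptions
Database.UpsertResult[] results =
    Database.upsert(accountList, Account.MY_COMPOSITE_KEY__c, false);
for (Database.UpsertResult ur : results) {
    if (!ur.isSuccess()) {
        System.debug('Skipped a record: ' + ur.getErrors()[0].getMessage());
    }
}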

Handle when a trigger is executed without having to indicate the columns

Currently I have two triggers on the same table for the UPDATE action. One of them is responsible for maintaining the audit fields in the table; the other is responsible for checking the updated records to perform validation. The problem is that although the validation trigger is prioritized above the audit one, when the two run on an update the process enters a small "loop" and the checks are performed twice instead of once.
I know that if I use the IF (UPDATE(Camp1)) BEGIN ... clause it will let the code run only when certain columns are modified, but is there any other way?
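For reference, the clause mentioned above looks roughly like this in T-SQL (table, trigger and validation body are invented):

CREATE TRIGGER trg_Validate ON dbo.MyTable
AFTER UPDATE
AS
BEGIN
    -- UPDATE(col) is true when the column appeared in the SET list,
    -- even if its value did not actually change
    IF UPDATE(Camp1)
    BEGIN
        -- validation logic goes here
        PRINT 'Camp1 was part of this update';
    END
END;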

Disable a trigger from a trigger (Oracle)

I've stumbled on a situation where I need to disable a trigger from within another trigger before doing an update, and then re-enable it.
Basically, I have two tables:
TIME_SLOTS has fields such as start time and end time to set the time slot for a programme, as well as a programme ID (foreign key) to specify which programme.
PROGRAMMES contains a list of all the different available programmes and their details. It also contains a duration.
I have an existing trigger that, on insert or update of TIME_SLOTS, looks up the duration from PROGRAMMES and ensures that End Time = Start Time + Duration.
I also want to add a new trigger that updates the End Time in TIME_SLOTS when changing the duration in PROGRAMMES.
I have set these two triggers up, but when changing the duration I get:
One error saving changes to table "SE217"."PROGRAMMES":
Row 1: ORA-04091: table SE217.PROGRAMMES is mutating, trigger/function may not see it
ORA-06512: at "SE217.SCHEDULES_VALID_TIMES", line 19
ORA-04088: error during execution of trigger 'SE217.SCHEDULES_VALID_TIMES'
ORA-06512: at "SE217.UPDATE_END_TIME", line 5
ORA-04088: error during execution of trigger 'SE217.UPDATE_END_TIME'
This is obviously because when I change the duration, the 2nd trigger goes to update the end time in TIME_SLOTS. The trigger on TIME_SLOTS then fires and tries to look up the duration, but PROGRAMMES is still mutating, so I get the error above.
It seems to me that when I update the TIME_SLOTS row with the newly calculated end time, I should just disable the other trigger beforehand and re-enable it after the update - but since I'm inside a trigger, I can't simply alter a trigger...
Any ideas?
EDIT: I had a thought that I could set a global variable and check it in the trigger that I don't want to run, but I wasn't sure how best to implement that.
You can almost certainly disable one trigger from another using an EXECUTE IMMEDIATE statement:
EXECUTE IMMEDIATE 'ALTER TRIGGER trigger_name_here DISABLE';
However, you definitely shouldn't be using triggers for application logic. It's a messy business, not least because triggers aren't guaranteed to fire in a particular order, but also because of the kind of "problem" you're experiencing.
It would be much easier and significantly safer to move all of the functionality you described to a stored procedure or package, and use triggers only where necessary for validation purposes.
Those kinds of problems occur when you have to customize existing functionality and you only have control over the database. So you are not able to replace the inserts/updates with a procedure; you can only react. In this situation you have triggers on both tables and propagate values between the tables in both directions.
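Regarding the global variable idea from the EDIT: one way to sketch it in PL/SQL is a package flag that the PROGRAMMES trigger sets around its UPDATE of TIME_SLOTS, and that the TIME_SLOTS trigger checks before validating (column names here are invented; a real version should also reset the flag in an exception handler):

CREATE OR REPLACE PACKAGE schedule_sync AS
  skip_validation BOOLEAN := FALSE;   -- package state is per session
END schedule_sync;
/

-- In UPDATE_END_TIME (the trigger on PROGRAMMES), around the UPDATE of TIME_SLOTS:
--   schedule_sync.skip_validation := TRUE;
--   UPDATE time_slots
--      SET end_time = start_time + :NEW.duration
--    WHERE programme_id = :NEW.id;
--   schedule_sync.skip_validation := FALSE;

-- At the top of SCHEDULES_VALID_TIMES (the trigger on TIME_SLOTS):
--   IF schedule_sync.skip_validation THEN
--     RETURN;
--   END IF;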

Safely deleting a Django model from the database using a transaction

In my Django application, I have code that deletes a single instance of a model from the database. There is a possibility that two concurrent requests could both try to delete the same model at the same time. In this case, I want one request to succeed and the other to fail. How can I do this?
The problem is that when deleting an instance with delete(), Django doesn't return any information about whether the command was successful or not. This code illustrates the problem:
b0 = Book.objects.get(id=1)
b1 = Book.objects.get(id=1)
b0.delete()
b1.delete()
Only one of these two delete() commands actually deleted the object, but I don't know which one. No exceptions are thrown and nothing is returned to indicate the success of the command. In pure SQL, the command would return the number of rows deleted and if the value was 0, I would know my delete failed.
I am using PostgreSQL with the default Read Committed isolation level. My understanding of this level is that each command (SELECT, DELETE, etc.) sees a snapshot of the database, but that the next command could see a different snapshot of the database. I believe this means I can't do something like this:
# I believe this won't work
from django.core.exceptions import ObjectDoesNotExist
from django.db.transaction import commit_on_success

@commit_on_success
def view(request):
    try:
        book = Book.objects.get(id=1)
        # Possibility that the instance is deleted by the other request
        # before we get to the next delete()
        book.delete()
    except ObjectDoesNotExist:
        # Already been deleted
        pass
Any ideas?
You can put the constraint right into the SQL DELETE statement by using QuerySet.delete instead of Model.delete:
Book.objects.filter(pk=1).delete()
This will never issue the SELECT query at all, just something along the lines of:
DELETE FROM Book WHERE id=1;
That handles the race condition of two concurrent requests deleting the same record at the same time, but it doesn't let you know whether your delete got there first. For that you would have to get the raw cursor (which django lets you do), .execute() the above DELETE yourself, and then pick up the cursor's rowcount attribute, which will be 0 if you didn't wind up deleting anything.
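A sketch of that approach (the table name is whatever Django generated for Book; myapp_book is assumed here):

from django.db import connection

def try_delete_book(book_id):
    with connection.cursor() as cursor:
        cursor.execute("DELETE FROM myapp_book WHERE id = %s", [book_id])
        # rowcount is 0 if some other request already deleted the row
        return cursor.rowcount == 1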

NHibernate session.flush() fails but makes changes

We have a SQL Server database table that consists of user id, some numeric value, e.g. balance, and a version column.
We have multiple threads updating this table's value column in parallel, each in its own transaction and session (we're using a session-per-thread model). Since we want every logical transaction to be applied, each thread does the following:
load the current row (mapped to a type).
make the change to the value, based on old value. (e.g. add 50).
session.update(obj)
session.flush() (since we're optimistic, we want to make sure we had the correct version value prior to the update)
if step 4 (flush) threw a StaleStateException, refresh the object (with LockMode.Read) and go to step 1
we only do this a certain number of times per logical transaction; if we can't commit it after X attempts, we reject the logical transaction.
each such thread commits periodically, e.g. after 100 successful logical transactions, to keep commit-induced I/O at manageable levels - meaning we have a single database transaction (per thread) with multiple flushes, at least one per logical change.
What's the problem here, you ask? Well, on commit we see changes from failed logical transactions.
Specifically, if the value was 50 when we went through step 1 (for the first time), and we tried to update it to 100 (but we failed since e.g. another thread changed it to 70), then the value of 50 is committed for this row. Obviously this is incorrect.
What are we missing here?
Well, I do not have a ton of experience here, but one thing I remember reading in the documentation is that if an exception occurs, you are supposed to immediately roll back the transaction and dispose of the session. Perhaps your issue is related to the session being in an inconsistent state?
Also, calling update in your code here is not necessary. Since you loaded the object in that session, it is already being tracked by NHibernate.
If you want to make your changes anyway, why do you bother with row versioning? It sounds like you should get the same result if you simply always update the data and let the last transaction win.
As to why the update becomes permanent, it depends on what the SQL statements for the version check/update look like and on your transaction control, which you left out of the code example. If you turn on the Hibernate SQL logging it will probably become obvious how this is happening.
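A rough C# sketch of the roll-back-and-discard advice above (sessionFactory, Balance and userId are made-up names, not the poster's actual code; requires using NHibernate;):

using (var session = sessionFactory.OpenSession())
using (var tx = session.BeginTransaction())
{
    try
    {
        var row = session.Get<Balance>(userId);
        row.Value += 50;
        session.Flush();          // the optimistic version check happens here
        tx.Commit();
    }
    catch (StaleObjectStateException)
    {
        tx.Rollback();            // throw the whole unit of work away
        // do not keep using this session; open a fresh one and retry the logical transaction
    }
}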
I'm not an NHibernate guru, but the answer seems simple.
When NHibernate loads an object, it expects it not to change in the DB as long as it's in the NHibernate session cache.
As you mentioned, you have a multi-threaded app.
This is what happens:
1st thread loads an entity
2nd thread loads an entity
1st thread changes the entity
2nd thread changes the entity and finds out that the loaded entity has been changed by something else; being afraid that it would screw up the changes the 1st thread made, it throws an exception to make the programmer aware of that.
You are missing a locking mechanism. I can't tell you much about how to apply that properly and elegantly; maybe a Transaction would help.
We had similar problems when we used NHibernate and raw ADO.NET concurrently (luckily just for querying, at least in production code). All we had to do was force the DB update on insert/update so we could actually query some specific entities through full-text search.
We got StaleStateException in integration tests when we used raw ADO.NET to reset the DB. The NHibernate session was alive through a bunch of tests, but every test tried to clean up the DB without NHibernate being aware of it.
Here is the documentation for exceptions in the session:
http://nhibernate.info/doc/nhibernate-reference/best-practices.html
