Even if a transaction is nested, nothing is actually written until the outermost transaction commits. So what is the point of nested transactions, and what specific situations call for the feature?
For example, assume a situation like this:
class A
{
    // props
    public void Save() { /* ... */ }
}
class B
{
    public A a { get; set; }
    // other props
    public void Save() { /* ... */ }
}
Now, when you want to save B, you first save A. Suppose that saving A involves some service calls for verification, and saving B involves similar verifications of its own. You need to roll back if B cannot be verified, and you also need to roll back if A cannot be verified, so you need a nested transaction. (In fact, separation of concerns leads to this; without nested transactions you would have to mix everything together and end up with spaghetti code.)
There is nothing called Nested Transactions.
The only transaction that SQL Server considers is the outermost one - it is the one that is committed or rolled back. Nested transactions are syntactically valid, and that's all. Suppose you call a procedure from inside a transaction, and that procedure contains transactions itself: the syntax allows you to nest them, but the only one that has any effect is the outermost.
edit: Reference here : http://www.sqlskills.com/BLOGS/PAUL/post/A-SQL-Server-DBA-myth-a-day-(2630)-nested-transactions-are-real.aspx
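This is easy to demonstrate with a small T-SQL sketch (the table name t is made up). The inner COMMIT merely decrements @@TRANCOUNT; the later ROLLBACK undoes everything, including the "committed" inner work:

```sql
CREATE TABLE t (id INT);

BEGIN TRAN;                -- @@TRANCOUNT = 1
INSERT INTO t VALUES (1);

BEGIN TRAN;                -- @@TRANCOUNT = 2 (nothing new actually begins)
INSERT INTO t VALUES (2);
COMMIT;                    -- @@TRANCOUNT back to 1; nothing is durable yet

ROLLBACK;                  -- rolls back everything, including the inner "commit"

SELECT COUNT(*) FROM t;    -- 0: both inserts are gone
```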
Related
In a single session, if I want to:
// make a query on Foo table to get one instance
// update this instance
// commit() or not?
// make the same query on Foo table
Will I get the same result in these two queries? That is to say, is it necessary to commit the update before querying the table again within a single session?
Thanks!
It is not necessary to commit prior to making the query again. As a general principle, updates within a transaction (session) will be visible to subsequent queries in that same transaction, even prior to a commit.
Having said that, doing the exact same query twice within a transaction might be a "code smell". It's worth considering: since the updated object is already in memory, is it really necessary to query it again?
Also, depending on the database isolation level, the second query is not guaranteed to return the same result set as the first one. This can happen if another transaction modifies the data prior to the second query.
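In SQL terms (the column names are made up), the read-your-own-writes behavior looks like this:

```sql
BEGIN TRAN;

UPDATE Foo SET name = 'updated' WHERE id = 1;

-- Still inside the same transaction/session: this select already
-- sees the updated value, before any COMMIT has been issued.
SELECT name FROM Foo WHERE id = 1;   -- 'updated'

COMMIT;
```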
It's not necessary to commit in between, as the effects of each statement are visible to subsequent statements (or queries) in the same session.
You can just put the commit at the end, although I am not sure whether multiple commits would affect runtime.
I have just begun to read about Salesforce APEX and its DML. It seems you can do bulk updates by constructing a list, adding items to be updated to the list, and then issuing an update myList call.
Does such an invocation of update create an atomic transaction, so that if for any reason an update to one of the items in the list should fail, the entire update operation is rolled back? If not, is there a way to wrap an update in an atomic transaction?
Your whole context is an atomic transaction. By the time your Apex code runs, Salesforce has already started one, whether the entry point is a Visualforce button click, a trigger, or anything else. If you hit a validation error, a null pointer exception, an attempt to divide by zero, a thrown exception, etc. - the whole thing will be rolled back.
update myList; works in "all or nothing" mode. If one of the records fails (on a validation rule, a required field missing, etc.) you'll get an exception. You can wrap it in a try-catch block, but still - the whole list just failed to save.
If you need "save what you can" behavio(u)r - read up on the Database.update() version of this call. It takes an optional parameter that lets you do exactly that.
Last but not least, for complex insert scenarios (insert an account, insert contacts, one of the contacts fails, but you had that in a try-catch so the account saved OK... so what now, do you have to manually delete it? Weak...) you have the Database.setSavepoint() and Database.rollback() calls.
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_dml_database.htm
https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/langCon_apex_transaction_control.htm
https://salesforce.stackexchange.com/questions/9410/rolling-back-dml-operation-in-apex-method
I have a quick question about Objectify - this may be in the actual documentation, but I haven't found anything, so I'm asking here to be safe.
I have a backend using Objectify that I kind of rushed out. What I would like to do is the following: I have an event plan that is made up of activities. Currently, if I delete an event, I am actually writing all of the logic to delete the individual activities inside the event plan's delete method.
What I'm wondering is: if I call the Activity's delete method from the event plan's delete method (if it lets me do this), is it atomic?
Sample (this is just pseudocode, not actual code - class and method names may be wrong):
// inside event plan dao
public void delete(EventPlan eventPlan) {
    final Objectify ofy = Objectify.beginTransaction();
    try {
        final ActivityDAO activityDao = new ActivityDAO();
        for (final Activity activity : eventPlan.getActivities()) {
            activityDao.delete(activity);
        }
        ofy.getTxn().commit();
    } finally {
        if (ofy.getTxn().isActive()) {
            ofy.getTxn().rollback();
        }
    }
}

// inside activity dao
public void delete(Activity activity) {
    final Objectify ofy = Objectify.beginTransaction();
    try {
        // do some logic in here, delete the activity and commit the txn
    } finally {
        // check and rollback as normal
    }
}
Is this safe to do? As it is right now, the reason it's so mangled is that I didn't realize the entity group issue: certain things in the activity were not in the same entity group as the activity itself. After fixing this I put all of the logic in the event plan's delete, and the method is becoming unmanageable. Is it OK to break things down into smaller pieces, or will that break atomicity?
thank you
Nested transactions do not happen in a single atomic chunk. There is not really any such thing as a nested transaction - the transactions in your example are all in parallel, with different Objectify (DatastoreService) objects.
Your inner transactions will complete transactionally. Your outer transaction doesn't really do anything. Each inner delete is in its own transaction - it is still perfectly possible for the first Activity to be successfully deleted even though the second Activity is not deleted.
If your goal is to delete a group of entities all-or-nothing style, look into using task queues. You can delete the first Activity and transactionally enqueue a task to delete the second, so you are guaranteed that either the Activity is deleted AND the task is enqueued, or neither happens. Then, in the task, you can do the same with the second, and so on. Since tasks are retried if they fail, you can make the behavior approximate a transaction. The only thing to beware of is that other requests may see the partially-deleted series while the process is underway.
If he removes the inner transaction will the outer transaction still do nothing?
I don't use Stored procedures very often and was wondering if it made sense to wrap my select queries in a transaction.
My procedure has three simple select queries, two of which use the returned value of the first.
In a highly concurrent application it could (theoretically) happen that data you've read in the first select is modified before the other selects are executed.
If that is a situation that could occur in your application you should use a transaction to wrap your selects. Make sure you pick the correct isolation level though, not all transaction types guarantee consistent reads.
Update :
You may also find this article on concurrent update/insert solutions (aka upsert) interesting. It puts several common methods of upsert to the test to see what method actually guarantees data is not modified between a select and the next statement. The results are, well, shocking I'd say.
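A hedged sketch of the pattern for the procedure described in the question (table and column names are made up; the exact isolation level to pick depends on your needs):

```sql
SET TRANSACTION ISOLATION LEVEL REPEATABLE READ;
BEGIN TRAN;

-- First select: the value the later selects depend on.
DECLARE @customerId INT;
SELECT @customerId = CustomerId FROM Orders WHERE OrderId = 42;

-- Under REPEATABLE READ, the row read above cannot be modified by
-- another session until this transaction ends, so these selects
-- are consistent with it.
SELECT * FROM Customers WHERE CustomerId = @customerId;
SELECT * FROM Invoices  WHERE CustomerId = @customerId;

COMMIT;
```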
Transactions are usually used when you have INSERT, UPDATE, or DELETE statements and you want atomic behavior - that is, either everything is committed or nothing is.
However, you could also use a transaction for SELECT statements to:
Make sure nobody else can update the tables of interest while your batch of select queries is executing.
Have a look at this msdn post.
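For example, one way to do this in SQL Server (table name made up) is a locking hint that holds the shared lock for the duration of the transaction:

```sql
BEGIN TRAN;

-- HOLDLOCK behaves like SERIALIZABLE for this statement: the shared
-- lock is kept until the transaction ends, so other sessions cannot
-- update or delete the rows read here in the meantime.
SELECT Balance
FROM   Accounts WITH (HOLDLOCK)
WHERE  AccountId = 1;

-- ... work with the value, confident it hasn't changed ...

COMMIT;
```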
Most databases run every single query in a transaction; even if one is not specified, the query is implicitly wrapped. This includes SELECT statements.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a transaction block.
https://www.postgresql.org/docs/current/tutorial-transactions.html
I've been sorting out the whole nested transaction thing in SQL Server, and I've gleaned these nuggets of understanding about the behavior of nested transactions:
- When nesting transactions, only the outermost commit will actually commit.
- "COMMIT TRAN txn_name", when nested, will always apply to the innermost transaction, even if txn_name refers to an outer transaction.
- "ROLLBACK TRAN" (no name), even in an inner transaction, will roll back all transactions.
- "ROLLBACK TRAN txn_name" - txn_name must refer to the outermost transaction's name. If not, it will fail.
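For example:

```sql
BEGIN TRAN outer_t;            -- @@TRANCOUNT = 1
BEGIN TRAN inner_t;            -- @@TRANCOUNT = 2

COMMIT TRAN outer_t;           -- despite the name, this pops only the
                               -- innermost level: @@TRANCOUNT = 1

-- ROLLBACK TRAN inner_t;      -- would fail: the name must be the outermost one
ROLLBACK TRAN outer_t;         -- fine: rolls back the whole transaction
```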
Given these, is there any benefit to naming transactions? You cannot use the name to target a specific transaction, either for commit or for rollback.
Is it only for code commenting purposes?
Thanks,
Yoni
Effectively it's just a programmer's aide-mémoire. If you're dealing with a transaction that has a number of inner transactions, giving each a meaningful name can help you make sure the transactions are appropriately nested, and may catch logic errors.
You can have procedures roll back only their own work on error, allowing the caller to decide whether to abandon the entire transaction or recover and try an alternate path. See Exception handling and nested transactions for a procedure template that allows this atomic behavior.
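A minimal sketch of that kind of template (the procedure name and the work itself are placeholders, and THROW assumes SQL Server 2012+): the procedure opens its own transaction when none exists; otherwise it creates a savepoint so an error rolls back only its own work:

```sql
CREATE PROCEDURE dbo.usp_DoWork
AS
BEGIN
    DECLARE @startedTran BIT = 0;
    IF @@TRANCOUNT = 0
    BEGIN
        BEGIN TRAN;
        SET @startedTran = 1;
    END
    ELSE
        SAVE TRAN usp_DoWork;          -- savepoint inside the caller's transaction

    BEGIN TRY
        -- ... the procedure's actual work goes here ...
        IF @startedTran = 1 COMMIT;
    END TRY
    BEGIN CATCH
        IF @startedTran = 1
            ROLLBACK;                  -- we own the transaction: undo it all
        ELSE IF XACT_STATE() = 1
            ROLLBACK TRAN usp_DoWork;  -- undo only this procedure's work
        ;THROW;                        -- let the caller decide what to do next
    END CATCH
END
```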
The idea is to roll back part of your work, like a nested transaction. Does not always work as intended.
Stored procedures using old-style error handling and savepoints may not work as intended when they are used together with TRY … CATCH blocks: Avoid mixing old and new styles of error handling.
Already discussed here: @@ERROR and/or TRY - CATCH