In a single session, if I want to:
// make a query on Foo table to get one instance
// update this instance
// commit() or not?
// make the same query on Foo table
Will I get the same result in these two queries? That is to say, is it necessary to commit the update before querying the table again within a single session?
Thanks!
It is not necessary to commit prior to making the query again. As a general principle, updates within a transaction (session) will be visible to subsequent queries in that same transaction, even prior to a commit.
Having said that, running the exact same query twice within a transaction might be a "code smell". It's worth considering: since the updated object is already in memory, is it really necessary to query it again?
Also, depending on the database isolation level, the second query is not guaranteed to return the same result set as the first one. This can happen if another transaction modifies the data prior to the second query.
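To see this in plain SQL, here is a minimal sketch (the Foo table, the bar column and the id value are hypothetical); the second SELECT sees the uncommitted UPDATE because it runs inside the same transaction:
BEGIN TRANSACTION;
SELECT bar FROM Foo WHERE id = 1;              -- first read
UPDATE Foo SET bar = 'new value' WHERE id = 1;
SELECT bar FROM Foo WHERE id = 1;              -- returns 'new value', even before COMMIT
COMMIT;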
It's not necessary to commit in between, as changes made within a transaction are visible to subsequent queries in that same transaction.
You can just put the commit at the end, although I am not sure whether multiple commits affect runtime.
I am updating a column in a SQL table, and I want to check whether it was updated successfully or whether it was already updated and my query didn't do anything,
the way we get @@ROWCOUNT in SQL Server.
In my case, I want to update a column named lockForProcessing: if the row is already being processed, my query would not affect any row, which means someone else is already processing it; otherwise I would process it myself.
If I understand you correctly, your problem is a multithreading/concurrency problem, where the same table may be updated simultaneously.
You may want to have a look at the NHibernate documentation:
Chapter 11. Transactions And Concurrency
The ISession is not threadsafe!
The entity is not stored the moment session.SaveOrUpdate() is executed, but typically after transaction.Commit().
Stored and committed are two different things.
The entity is stored after any session.Flush(). Depending on the IsolationLevel, the stored-but-uncommitted entity won't be seen by other transactions.
The entity is committed after transaction.Commit(). A commit also flushes.
Maybe all you need to do is choose the right IsolationLevel when beginning transactions and then read the table row to get the current value:
using (var transaction = session.BeginTransaction(IsolationLevel.Serializable))
{
    var row = session.Get<MyEntity>(id); // read your row (MyEntity and id are placeholders)
    transaction.Commit();
}
Maybe it is easier to create some locking or pipeline mechanism in your application code though. Without knowing more about who is accessing the database (other transactions, sessions, processes?) it is hard to answer more precisely.
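If all you need is to claim the row atomically, a single conditional UPDATE followed by a rowcount check may be enough. A T-SQL sketch, where the jobs table, its columns and the @id value are assumptions:
DECLARE @id int = 42; -- hypothetical row id

UPDATE jobs
SET lockForProcessing = 1
WHERE id = @id
  AND lockForProcessing = 0;

IF @@ROWCOUNT = 1
    PRINT 'Lock acquired; process the row.';
ELSE
    PRINT 'Someone else is already processing it.';
Because the UPDATE both tests and sets the flag in a single statement, two concurrent callers cannot both see @@ROWCOUNT = 1.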
Not sure if this has been asked before, because while typing the title the suggested possible duplicates didn't match.
One of my colleagues asked whether a DML trigger's functionality can be replaced entirely with a stored procedure (SP). It sounds a bit weird at first, but it's possible, because a trigger is also a special type of SP, just not explicitly callable.
Say, for example, an AFTER INSERT trigger named trg_insert1 defined on tbl1, which updates some data in tbl2, like below (a SQL Server example is used, but the question is not specific to any DB):
create trigger trg_insert1
on tbl1
after insert
as
begin
    update t2
    set somedata = i.tbl1somedata
    from tbl2 t2
    join inserted i on t2.id = i.tbl1id;
end
Now this trigger can be replaced with an SP like below (using a transaction block):
create procedure usp_insertupdate (@name varchar(10), @data varchar(200))
as
begin
    begin try
        begin tran
        insert into tbl1 (name, data) values (@name, @data);
        update tbl2
        set somedata = @data
        where id = scope_identity();
        commit tran
    end try
    begin catch
        if @@TRANCOUNT > 0
            rollback tran
    end catch
end
This will work in almost all DML trigger cases (after/before insert/update/delete). BUT I really couldn't answer/explain:
What is the difference then?
Is it a good practice to do so?
Is it not possible in all cases?
Am I overthinking this?
Please let me know what you think.
[NOTE: Not a specific RDBMS related question though]
I'll try to answer in a very general sense (you specified this is not targeted to a specific implementation).
First of all, a trigger is written in the same data manipulation language that you would use for a stored procedure. So in terms of capabilities, triggers and stored procedures are the same.
But...
a trigger is guaranteed to be invoked every time you alter the data, no matter if you do that through a stored procedure, another trigger, or by manually executing a SQL statement.
In fact you can expect a trigger to always execute (for its triggering statement) unless you explicitly disable it.
A stored procedure, on the other hand, is guaranteed never to run by itself unless you explicitly run it.
This has an important consequence: triggers are better at ensuring consistency. If someone in a hurry removes a record in your live instance by typing:
Delete from tablex where uid = 'QWTY10311'
any bookkeeping action implemented as a trigger will be executed, while if the user forgets (or maliciously avoids) following this with
Execute SP_TABLEX_LOG('DELETE', 'QWTY10311')
your DB will just lose the data silently.
Triggers have two other important characteristics that can be duplicated with stored procedures only through extra (sometimes significantly more expensive) effort.
First of all, they are executed record by record. So if you are deleting 1 million records, the logging will be performed for each operation. Good luck calling the appropriate stored procedure with a 1-million-row cursor as a parameter, ESPECIALLY if you want to do that after a manual operation as in my example above.
Second advantage: triggers have a special scope where they can reference pre- and post-change values for each field.
So if you are incrementing a table of prices by 10% and want to log what the previous value was, and which user performed the action at what time, you will have "old-value", "new-value", "user-id" and "timestamp" in scope for any kind of operation you may want to do.
Again, doing this by invoking a stored procedure means you have to save the values to pass them to the stored procedure when it runs.
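As an illustration of that scope, here is a hypothetical SQL Server audit trigger; the prices and price_log tables and their columns are assumptions, while deleted and inserted are the real pseudo-tables holding the old and new values:
CREATE TRIGGER trg_price_audit ON prices
AFTER UPDATE
AS
BEGIN
    -- log old value, new value, user and time for every updated row
    INSERT INTO price_log (price_id, old_value, new_value, changed_by, changed_at)
    SELECT d.id, d.price, i.price, SUSER_SNAME(), SYSUTCDATETIME()
    FROM deleted AS d
    JOIN inserted AS i ON i.id = d.id;
END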
So why bother with SP anyway? (this will answer, hopefully, your question about "best use case").
Stored procedures are better when you need to create complex business logic which will be invoked by an application layer. So if you want to know, for example, how many hotel rooms are available between two given dates, with the extra requirement that pets are allowed, a trigger would not be a good idea.
Especially because a trigger will not return any result to an invoking process...
So anytime you need to get some result to the caller, be it a query, a calculation, or anything else that has OUTPUT parameters, a trigger is useless.
Triggers should be used to enforce consistency. If a header record should not be deleted unless it has no children in other tables, enforce this with a trigger, maybe. If you need to log whoever changes a value in a field, no matter how, use a trigger.
In all other cases, use a stored procedure (keep also in mind that triggers will impact the responsiveness of any data update, just like indexes).
Yes stored procedures can be used to replace DML triggers in this way, and whether it is a good practice or not depends on your needs.
The main difference is that a trigger will run its code every time it is fired. In your example, if a user does an ad-hoc INSERT to tbl1, a trigger will fire and tbl2 will get updated.
A stored procedure can only be used to enforce this rule if ad-hoc INSERTs are not allowed.
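In SQL Server, for instance, one way to rule out ad-hoc INSERTs is to deny direct table access and only grant execute rights on the procedure; a sketch, assuming a hypothetical app_role database role:
DENY INSERT ON dbo.tbl1 TO app_role;
GRANT EXECUTE ON dbo.usp_insertupdate TO app_role;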
Say that a method only reads data from a database and does not write to it. Is it always the case that such methods don't need to run within a transaction?
In many databases, a read request that is not part of an explicit transaction implicitly creates a transaction in which to run.
In a SQL database you may want to use a transaction if you are running multiple SELECT statements and you don't want changes from other transactions to show up in one SELECT but not an earlier one. A transaction running at the SERIALIZABLE transaction isolation level will present a consistent view of the data across multiple statements.
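For example (the orders table is a placeholder), under SERIALIZABLE the two reads are guaranteed to be consistent with each other:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

SELECT COUNT(*) FROM orders;    -- first read
SELECT SUM(total) FROM orders;  -- sees exactly the same rows as the first read

COMMIT;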
No. If you don't read at a specific isolation level you might not get enough guarantees. For example rows might disappear or new rows might appear.
This is true even for a single statement:
select * from Tab
except select * from Tab
This query can actually return rows in case of concurrent modifications because it scans the table twice.
SQL Server: There is an easy way to get fast, nonblocking, nonlocking, consistent reads: Enable snapshot isolation and read in a snapshot transaction. AFAIK Oracle has this capability as well. Postgres too.
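A minimal SQL Server sketch of that approach (MyDb and Tab stand in for your database and table):
ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;  -- one-time setup

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM Tab;  -- both reads see the same consistent snapshot
SELECT * FROM Tab;  -- no shared locks taken, writers are not blocked
COMMIT;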
The purpose of a transaction is to roll back or commit the operations done to a database; if you are just selecting values and making no change to the data, there is no need for a transaction.
I have a question about the examples in this article:
http://code.google.com/appengine/articles/transaction_isolation.html
Suppose I put Adam and Bob in the same entity group and modify the operation getTallPeople to only check the height of Adam and Bob (i.e. access only entities in the entity group). Now, if I execute the following statements:
begin transaction
updatePerson (update Adam's height to 74 inches)
commit transaction
begin transaction
getTallPeople
commit transaction
Can I be sure that getTallPeople will always return both Adam and Bob? I.e. if entity/index updates have not completed, will the second transaction wait until they have? Also, would the behavior be the same without using a transaction for getTallPeople?
Thanks for your help!
Yes. For getTallPeople to be called within a transaction, it must use an "ancestor" filter in its query to limit its results to members of the group. If it does so, both the index data it uses to determine the results and the entities it fetches based on those results will be strongly consistent with the committed results of the previous transaction. This is also true without the explicit transaction if the query uses an ancestor filter and you're using the HR datastore. (The HR datastore has been the default for a while, so you're probably using it.)
If getTallPeople performs a query without an ancestor filter and you're using the HR datastore, it will use the global index data, which is only guaranteed to be eventually consistent across the dataset. In this case, the query might see index data for the group prior to the previous transaction, even though the previous transaction has already committed.
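For reference, an ancestor filter in GQL looks roughly like this; the Person kind, the Group key and the 72-inch threshold are assumptions made up for this example:
SELECT * FROM Person
WHERE ANCESTOR IS KEY('Group', 'smiths')
  AND height >= 72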
I don't use Stored procedures very often and was wondering if it made sense to wrap my select queries in a transaction.
My procedure has three simple select queries, two of which use the returned value of the first.
In a highly concurrent application it could (theoretically) happen that data you've read in the first select is modified before the other selects are executed.
If that is a situation that could occur in your application you should use a transaction to wrap your selects. Make sure you pick the correct isolation level though, not all transaction types guarantee consistent reads.
Update:
You may also find this article on concurrent update/insert solutions (aka upsert) interesting. It puts several common methods of upsert to the test to see what method actually guarantees data is not modified between a select and the next statement. The results are, well, shocking I'd say.
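One commonly recommended pattern is to lock the key range in the initial SELECT, so nothing can slip in between it and the INSERT/UPDATE; a T-SQL sketch, with table t and columns k, v as placeholders:
DECLARE @k int = 1, @v varchar(100) = 'value';  -- hypothetical key and payload

BEGIN TRANSACTION;

IF EXISTS (SELECT 1 FROM t WITH (UPDLOCK, HOLDLOCK) WHERE k = @k)
    UPDATE t SET v = @v WHERE k = @k;
ELSE
    INSERT INTO t (k, v) VALUES (@k, @v);

COMMIT;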
Transactions are usually used when you have INSERT, UPDATE, or DELETE statements and you want atomic behavior, that is, either everything is committed or nothing is.
However, you could use a transaction for READ select statements to:
Make sure nobody else can update the table of interest while your batch of SELECT statements is executing.
Have a look at this msdn post.
Most databases run every single query in a transaction; even if one is not specified, the query is implicitly wrapped in one. This includes SELECT statements.
PostgreSQL actually treats every SQL statement as being executed within a transaction. If you do not issue a BEGIN command, then each individual statement has an implicit BEGIN and (if successful) COMMIT wrapped around it. A group of statements surrounded by BEGIN and COMMIT is sometimes called a transaction block.
https://www.postgresql.org/docs/current/tutorial-transactions.html