I am copying some data from SQL Server to Firebird over the network. For integrity I need to use transactions, but I am transferring about 9k rows. Does reading inside an open transaction have any negative effect on read cost, compared to reading without an explicit transaction?
"Non-Transaction mode" simply doesn't exists, there is always a transaction, whether you declare it or not. The question is really whether is there any difference between reading in an implicit transaction vs. reading in an explicit transaction. And the short answer is no, there is no difference.
There may be some difference if you use an explicit transaction under a higher isolation level, other than the default READ_COMMITTED. It also depends whether you do anything else in an explicit transaction, but all these details cannot be infered from the frugal information in your post.
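To make the equivalence concrete, a minimal sketch (the table name is hypothetical):

-- Autocommit: the SELECT runs in its own implicit transaction.
SELECT * FROM dbo.SourceRows;

-- Explicit transaction: the read itself costs the same at the same isolation level.
BEGIN TRANSACTION;
SELECT * FROM dbo.SourceRows;
COMMIT TRANSACTION;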
The default transaction isolation level is READ COMMITTED. While a query is reading, it holds shared locks on the data being read, which can block writers until each read completes.
MSDN on transaction isolation level:
http://msdn.microsoft.com/en-us/library/ms173763.aspx
I've occasionally run into strange error messages about locking deadlocks. This even happened to Stack Overflow - see this great article by Jeff Atwood. I strongly recommend switching to READ COMMITTED SNAPSHOT, which fixed both the errors and my performance issues.
http://www.codinghorror.com/blog/2008/08/deadlocked.html
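For reference, switching the database over is a one-liner (the database name is hypothetical; the ALTER needs exclusive access, hence the termination clause):

ALTER DATABASE MyDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;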
This question may have been asked before, but from what I've read I still don't clearly understand what's happening. My question deals specifically with using the READ UNCOMMITTED isolation level in SQL Server via Entity Framework (i.e. setting the isolation level on the database object in the context when starting a transaction).
The documentation states that read uncommitted allows other transactions/queries to read the tables involved in the initial transaction. This works fine up until a write-based operation is performed; at that point the table is locked. I've tested this by starting a transaction in a project and stopping at a breakpoint after the update, then trying to access the related table in SQL Server Management Studio. I am not able to do so. This makes me wonder what the purpose of read uncommitted is if you can't read uncommitted data. I have read that there are two types of locking behaviour, and that the second type, which is not affected by the isolation level, is what is locking the table. That still doesn't explain why read uncommitted is not working: the documentation says it allows reading of uncommitted operations, yet it seems not to.
Peter
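For what it's worth, one detail that test can trip over: the isolation level governs the reading session, and a separate SSMS window defaults to READ COMMITTED, so it has to opt in itself before it can read past the writer's locks. A minimal sketch for the SSMS side, with a hypothetical table name:

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
SELECT * FROM dbo.Orders;  -- dirty read: sees rows held by the open, uncommitted transaction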
I'm looking into optimizing throughput in a Java application that frequently (100+ transactions/second) updates data in a PostgreSQL database. Since I don't mind losing a few transactions if the database crashes, I think that using asynchronous commit could be a good fit.
My only concern is that I don't want a delay after commit until other transactions/queries see my commit. I am using the default isolation level, READ COMMITTED.
So my question is: does using asynchronous commit in PostgreSQL in any way mean that there will be a delay before other transactions see my committed data, or before they proceed if they have been waiting for my transaction to complete? (As mentioned before, I don't care if there is a delay before my data is persisted to disk.)
It would seem that this is the behavior you're looking for.
http://www.postgresql.org/docs/current/static/wal-async-commit.html
Selecting asynchronous commit mode means that the server returns success as soon as the transaction is logically completed, before the WAL records it generated have actually made their way to disk. This can provide a significant boost in throughput for small transactions.
The WAL is used to provide on-disk data integrity and has nothing to do with the table integrity of a running server; it only matters if the server crashes. Since the documentation specifically says "as soon as the transaction is logically completed", it is indicating that asynchronous commit does not affect what other transactions see or when they see it.
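A minimal sketch of opting in (the table and values are hypothetical; synchronous_commit is the documented setting):

-- Session-wide:
SET synchronous_commit = off;

-- Or scoped to a single transaction:
BEGIN;
SET LOCAL synchronous_commit = off;
UPDATE measurements SET reading = 42 WHERE sensor_id = 7;
COMMIT;  -- returns before the WAL record reaches disk; visibility to others is immediate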
MSDN describes the JET transaction isolation for its OLEDB provider as follows:
Jet supports five levels of nesting in transactions. The only supported mode for transactions is Read Committed. Setting lesser levels of transactional separation implies Read Committed. Setting higher levels will cause StartTransaction to fail. Jet supports only single-phase commit.
MSDN describes Read Committed as follows:
Specifies that shared locks are held while the data is being read to avoid dirty reads, but the data can be changed before the end of the transaction, resulting in nonrepeatable reads or phantom data. This option is the SQL Server default.
My questions are:
What is single-phase commit? What consequence does this have for transactions and isolation?
Would the Read Committed isolation level as described above be suitable for my requirements here?
What is the best way to achieve a Serializable transaction isolation using Jet?
By question number:
Single-phase commit is used where all of your data is in one database: the transaction's activity is committed atomically and you're done. If you have a logical transaction that needs to span multiple storage engines (say, a relational database for metadata and some sort of document store for a big blob), you can use a transaction manager to coordinate the activities so that the work is persisted in both or in neither, provided both products support two-phase commit. They are just telling you that Jet doesn't support two-phase commit, so the product is not suitable for distributed transactions (a sketch of what two-phase commit looks like in SQL Server follows at the end of this answer).
Yes, if you check the condition in the UPDATE statement itself (see the sketch just after these answers); otherwise you might have problems.
They seem to be suggesting that you can't.
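Regarding question 2, a sketch of checking the condition in the UPDATE itself (the schema and values are hypothetical):

UPDATE dbo.Inventory
SET Quantity = Quantity - 1
WHERE ProductID = 42
  AND Quantity > 0;  -- the guard travels with the update, so check and change are atomic

IF @@ROWCOUNT = 0
    PRINT 'Nothing updated: the condition no longer held.';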
As an aside, I worked for decades as a consultant in quite a variety of environments. More than once I was engaged to migrate people off of Jet because of performance problems. In one case a simple "star" type query was running for two minutes because it was joining on the client rather than letting the database do it. As a direct query against the database it was sub-second. In another case there was a report which took 72 hours to run through Jet, which took 2 minutes when run directly against the database. If it generally works OK for you, you might be able to deal with such situations by using stored procedures where Jet is causing performance pain.
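And on single-phase vs. two-phase commit from question 1: a hedged sketch of what a distributed (two-phase) transaction looks like in T-SQL, which Jet cannot participate in. The server, database, and table names are hypothetical, and this requires MSDTC plus a configured linked server:

BEGIN DISTRIBUTED TRANSACTION;
UPDATE dbo.LocalOrders SET Status = 'Shipped' WHERE OrderID = 42;
UPDATE RemoteServer.RemoteDb.dbo.Orders SET Status = 'Shipped' WHERE OrderID = 42;
COMMIT TRANSACTION;  -- MSDTC runs prepare on both servers, then commits on both or neither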
When you create a SQL Server transaction, what gets locked if you do not specify a table hint? In order to lock something, must you always use a table hint? Can you lock rows/tables outside of transactions (i.e. in ordinary queries)? I understand the concept of locking and why you'd want to use it; I'm just not sure how to implement it in SQL Server. Any advice appreciated.
You should use query hints only as a last resort, and even then only after expert analysis. In some cases, they will cause a query to perform badly. So, unless you really know what you are doing, avoid using query hints.
Locking (of various types) happens automatically every time you perform a query (unless NOLOCK is specified). The default transaction isolation level is READ COMMITTED.
What are you actually trying to do?
Understanding Locking in SQL Server
"Can you lock rows/tables outside of transactions (i.e. in ordinary queries)?"
You'd better understand that there are no ordinary, non-transactional queries or actions in SQL Server: all of them, without exception, are transactional. This is how ACID-ness is achieved; see, for example, [1]. If the client tool or developer does not explicitly wrap statements in BEGIN TRANSACTION and COMMIT/ROLLBACK, each statement runs in its own autocommit (implicit) transaction.
Also, a transaction is not synonymous with locking or lock engagement. There is a plethora of mechanisms to control concurrency without locking (for example, row versioning), and the READ UNCOMMITTED "isolation" level (in this case, the absence of any isolation) does not control concurrency at all.
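For example, row versioning must be enabled at the database level before sessions can use it (the database and table names are hypothetical):

ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT * FROM dbo.Orders;  -- reads committed row versions; takes no shared locks
COMMIT;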
Update2:
In order to lock something, must you always use a table hint?
As long as the transaction isolation level is not READ UNCOMMITTED or one of the row-versioning (snapshot) isolation levels - for example, the default READ COMMITTED, or a level set explicitly with, say,
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE
locks are issued (the topic is too broad to cover fully here; see [2]). Table hints, which can be used in individual statements, override these session settings.
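For example (hypothetical table; both hints are documented T-SQL):

-- The session may be at the default READ COMMITTED, but the hint wins per statement:
SELECT * FROM dbo.Orders WITH (NOLOCK);    -- behaves like READ UNCOMMITTED
SELECT * FROM dbo.Orders WITH (HOLDLOCK);  -- behaves like SERIALIZABLE for this table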
[1]
Paul S. Randal. Understanding Logging and Recovery in SQL Server
What is Logging?
http://technet.microsoft.com/en-us/magazine/2009.02.logging.aspx#id0060003
[2]
http://en.wikipedia.org/wiki/Isolation_(database_systems)
I understand that an Isolation level of Serializable is the most restrictive of all isolation levels. I'm curious though what sort of applications would require this level of isolation, or when I should consider using it?
Ask yourself the following question: Would it be bad if someone were to INSERT a new row into your data while your transaction is running? Would this interfere with your results in an unacceptable way? If so, use the SERIALIZABLE level.
From MSDN regarding SET TRANSACTION ISOLATION LEVEL:
SERIALIZABLE
Places a range lock on the data set, preventing other users from updating or inserting rows into the data set until the transaction is complete. This is the most restrictive of the four isolation levels. Because concurrency is lower, use this option only when necessary. This option has the same effect as setting HOLDLOCK on all tables in all SELECT statements in a transaction.
So your transaction holds all its locks for its entire lifetime, even those that would normally be released as soon as they're used. This makes it appear as if all transactions are running one at a time, hence the name SERIALIZABLE. A note from Wikipedia regarding isolation levels:
SERIALIZABLE
This isolation level specifies that all transactions occur in a completely isolated fashion; i.e., as if all transactions in the system had executed serially, one after the other. The DBMS may execute two or more transactions at the same time only if the illusion of serial execution can be maintained.
The SERIALIZABLE isolation level is the highest isolation level based on pessimistic concurrency control where transactions are completely isolated from one another.
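A small sketch of that range-locking behavior (hypothetical table):

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
SELECT COUNT(*) FROM dbo.Orders WHERE CustomerID = 7;
-- Until COMMIT, no other session can INSERT a row with CustomerID = 7:
-- the key range is locked, so a re-read here returns the same count.
COMMIT TRANSACTION;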
The ANSI/ISO SQL-92 standard covers the following read phenomena, which can occur when one transaction reads data that is being changed by a second transaction:
dirty reads
non-repeatable reads
phantom reads
and the Microsoft documentation extends the list with the following two:
lost updates
missing and double reads caused by row updates
The following table shows the concurrency side effects allowed by the different isolation levels:

Isolation level     Dirty read   Nonrepeatable read   Phantom
READ UNCOMMITTED    Yes          Yes                  Yes
READ COMMITTED      No           Yes                  Yes
REPEATABLE READ     No           No                   Yes
SNAPSHOT            No           No                   No
SERIALIZABLE        No           No                   No
So, the question is which read phenomena your business requirements can tolerate, and then whether your hardware environment can handle the stricter concurrency control.
Note something very interesting about the SERIALIZABLE isolation level: it is the default isolation level specified by the SQL standard. In the context of SQL Server, of course, the default is READ COMMITTED.
Also, the official Transaction Locking and Row Versioning Guide covers and explains many of these aspects and is a great place to read further.
Try accounting. Transactions in accounts are inherently serializable if you want to have proper account values AND adhere to things like credit limits.
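A sketch of why (the schema and amounts are hypothetical): under SERIALIZABLE, the balance check and the debit behave as one atomic unit, so two concurrent debits cannot both slip past the credit limit.

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

DECLARE @balance DECIMAL(18,2), @limit DECIMAL(18,2);
SELECT @balance = Balance, @limit = CreditLimit
FROM dbo.Accounts
WHERE AccountID = 42;

IF @balance - 100.00 >= -@limit
    UPDATE dbo.Accounts SET Balance = Balance - 100.00 WHERE AccountID = 42;
ELSE
    PRINT 'Debit refused: credit limit would be exceeded.';

COMMIT TRANSACTION;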
In practice, when another session tries to update a row covered by a serializable transaction's locks, the update is simply blocked until that transaction completes.