Our application runs alongside another application on a customer's machine.
We have put some effort into avoiding long-running locks in tempdb, since this obviously affects concurrency badly.
The other application, however, does things like:
begin transaction;
create table #Table (...);
insert into #Table (...) values (...);
operation_for_totally_six_seconds();
commit;
Since these operations take time, our application gets stuck waiting for the locks acquired by the other application.
Now, I would expect there to be a way to isolate my application from the other application, for example by telling SQL Server to assign me another tempdb, but I have not found one. Is this somehow possible, or is the solution to install our database on another SQL Server instance?
Regards,
Jens Nordenbro
"long-running locks in tempdb since this obviously affects concurrency badly"
That is actually not obvious at all. Long-held locks matter only if you and the other application go after the same locks. The code sample you posted is perfectly legitimate. First of all, #temp is a connection-specific table that no other connection can even see. But even if it were a global resource, it would belong to the other application, and hence you would have no business acquiring locks on it.
As an exercise, open an SSMS query window and run this:
begin transaction;
create table #temp (a int);
Then open a second query window and run the same. QED: they don't block each other, despite creating the very same #temp table.
If tempdb is indeed a bottleneck, you need to do some more investigation and find the actual resources on which contention occurs.
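For example, a query like this (a sketch, assuming SQL Server 2005 or later so the DMVs are available; database id 2 is tempdb) shows what waiting sessions are actually waiting on:

SELECT  wt.session_id,
        wt.wait_type,
        wt.wait_duration_ms,
        wt.resource_description,
        wt.blocking_session_id
FROM    sys.dm_os_waiting_tasks AS wt
WHERE   (wt.wait_type LIKE 'PAGE%LATCH%' AND wt.resource_description LIKE '2:%')  -- tempdb page contention
     OR wt.wait_type LIKE 'LCK_M%';                                               -- ordinary lock waits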
One option is to run your app in a different instance of SQL Server on the same machine. That way you would have your own tempdb.
Related
I have a long running stored procedure that is executed from IIS. On average this stored procedure takes between two and five minutes to complete because it is searching through a large dataset (although it has taken around 20 minutes in some cases).
Most of the time the stored procedure works fine, but every now and then the SPIDs go into a sleeping state and never recover. The only solution I have found is to restart SQL Server and re-run the stored procedure.
There are no table inserts in the proc (only table variable inserts), and the other statements are selects on a large table.
I'm stuck on where to start debugging this issue. Any hints on what it might be, or suggestions on tools that would help me find the issue, would be most helpful.
EDIT: More info added:
The actual issue is that the proc doesn't return the result set. My first thought was to look at the SPIDs; they were sleeping, but the CPU time was still increasing.
It's a .NET app, .NET Core 3.1 with ASP.NET Core and a Blazor UI. The library used for the db connection is System.Data.SqlClient; I believe System.Data.SqlClient uses its own custom driver. Calling code below:
The stored procedure doesn't return multiple result sets; however, different instances of the proc obviously run at the same time.
There are no limits on connection pooling in IIS.
@RichardWatts, when you say "re-run the stored procedure", do you mean that the same stored proc with the same parameters and data works once you restart SQL Server?
If so, look at your locks (sp_lock) on your tables; probably another process is locking some data and not releasing it properly, especially if you have transactions accessing the same tables.
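For example, these standard system procedures give a quick view of sessions and locks (interpreting the output is up to you):

EXEC sp_who2;   -- sessions; the BlkBy column shows who is blocking whom
EXEC sp_lock;   -- current locks per session (spid, object, mode, status)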
What is the isolation level on your connection? If you can, try changing it to READ UNCOMMITTED to see if that solves your problem.
As an alternative, you can also add a WITH (NOLOCK) or WITH (READUNCOMMITTED) hint to your SQL command.
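A minimal sketch of both options (the table name is hypothetical):

-- per connection/session:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

-- or per query, via a table hint:
SELECT *
FROM   dbo.Orders WITH (NOLOCK)
WHERE  Status = 1;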
Be aware that a query running with READ UNCOMMITTED or NOLOCK can still be held up by modifications to the structure of your tables or by an index rebuild, for example, which will in turn block its execution.
Nevertheless, be cautious: whether this solution fits depends on your environment. Especially if your tables get lots of updates, deletes, inserts, etc., this kind of isolation can lead to dirty reads, and it doesn't address the root cause of your problem, which I would bet is an uncommitted transaction (there is a good article that explains it).
Also run a DBCC CHECKTABLE just to be sure on that side.
I joined a project a while ago, which consists of a few web servers and a few backend servers.
They all do CRUD things on one database.
Unfortunately, a few tables have been falling into deadlock situations for a while now. We can see the victim statements via SQL Server Management Studio and its Extended Events feature.
Primary keys and all the necessary indexes are set already. We even rebuilt them; a lot of them had fragmentation over 50%.
Thing is, there is this one table we would like to switch to the isolation level called SNAPSHOT. I know this won't solve the deadlock situation entirely, since I have read that write statements might still block each other.
One table contains logs (logins of users, tasks started and ended on the backends, yadda yadda...), the other one contains all the processes, so the backends are selecting, inserting and updating (like setting the "running" field from 0 to 1 and vice versa). While the first one, for logging reasons, might be a good fit for snapshot isolation, I doubt it is recommended for the process table, as far as I understand how snapshot isolation works. And I am also aware that rollbacks of transactions will block the tables during the rollback process anyway.
Even the sysobjects table is getting blocked sometimes when a table has to be dropped. And I must mention that the database is ridiculously large, as in many, many tables.
What I would like to know is whether you have ever switched from whatever isolation level to SNAPSHOT, what challenges you had to face, or whether you changed your mind when it came to deadlock prevention and tried a different approach, like a hardware upgrade, etc.
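For reference, switching to snapshot isolation would look roughly like the sketch below (database and table names are hypothetical; keep in mind the version store lives in tempdb):

-- enable snapshot isolation at the database level
ALTER DATABASE MyAppDb SET ALLOW_SNAPSHOT_ISOLATION ON;
-- optionally let plain READ COMMITTED reads use row versioning as well
ALTER DATABASE MyAppDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

-- in a session that should read a consistent, non-blocking snapshot:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
    SELECT COUNT(*) FROM dbo.ProcessTable;
COMMIT TRANSACTION;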
I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server) in one single table, 'tbSysMasterLog'. Yes, the log of the application is stored in another database.
The problem is, before any insert/update/delete command on the application database, a transaction is started, and therefore the table in the log database is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application will be blocked.
So... is there any way to disable transactions on a particular database or on a particular table?
You cannot turn off the log; everything gets logged. You can set the recovery model to "Simple", which will limit the amount of log data kept after the records are committed.
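Setting the recovery model is a one-liner (the database name is hypothetical):

ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;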
" the table of the log database is locked": why that?
Normally you log changes by inserting records. Inserting records should not lock the complete table; normally there should not be any contention on inserts.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should look at the indexes defined on the log table; perhaps you can avoid some of them.
It sounds from the question like you have a BEGIN TRANSACTION at the start of your triggers, and that you are logging to the other database prior to the COMMIT TRANSACTION.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table.
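A minimal sketch of that idea (table and column names are hypothetical, including the columns I am guessing for tbSysMasterLog):

DECLARE @LoggedValue int;

BEGIN TRANSACTION;
    UPDATE dbo.Orders SET Status = 1 WHERE OrderId = 42;

    -- capture what should be logged instead of writing to the log database
    -- while the transaction is still open
    SELECT @LoggedValue = Status FROM dbo.Orders WHERE OrderId = 42;
COMMIT TRANSACTION;

-- the log insert now runs outside the transaction, so it does not hold
-- locks in the log database while the main transaction is open
INSERT INTO LogDb.dbo.tbSysMasterLog (LoggedValue, LoggedAt)
VALUES (@LoggedValue, GETDATE());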
Normally inserts are fast and can happen in parallel without blocking. There are certain things, like identity columns, that require ordering, but they are a very lightweight structure and can be avoided by generating GUIDs so inserts are non-blocking. For something like your log table, though, a primary key identity column would give you a clear sequence that is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log entries may not be in the same order as the transactions occurred, due to the different times transactions take to commit.
We normally log into individual tables with a name similar to the master table, e.g. FooHistory or AuditFoo.
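For example, a history table of that shape might look like this (names and column types are hypothetical):

CREATE TABLE dbo.AuditFoo
(
    AuditId   int IDENTITY(1,1) NOT NULL CONSTRAINT PK_AuditFoo PRIMARY KEY,
    FooId     int           NOT NULL,
    ChangedAt datetime      NOT NULL CONSTRAINT DF_AuditFoo_ChangedAt DEFAULT (GETDATE()),
    OldValue  nvarchar(200) NULL,
    NewValue  nvarchar(200) NULL
);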
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning and gives you a copy of every statement run on the database (including from triggers), and you can log it to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive if you are tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script up starting, stopping, and loading traces.
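Loading a trace file into a table is a single statement along these lines (the file path and target table are hypothetical):

SELECT *
INTO   dbo.LoadedTrace
FROM   sys.fn_trace_gettable(N'C:\Traces\MyTrace.trc', DEFAULT);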
The load on the server that is getting the trace log is minimal and I have never had a locking problem on the server receiving the trace, so I am pretty sure that you are doing something to cause the locks.
I have two applications.
One inserts data into the database continuously, as if in an infinite loop.
What will happen when the second application inserts data into the same database and table?
If it waits for the other application to finish inserting, what handles this?
Or will it report that it is busy?
Or will the code throw an exception?
SQL Server has something called a connection pool, which means that more than one connection to the database can be open at any particular time, and that's where the easy bit ends.
If you were, for example, to connect to the database from two applications at the same time and insert data into different tables from each application, the two inserts could happily happen at the same time without issue.
If, however, those applications wanted to do something like edit the same row, then there's an issue with "locking"...
Essentially, any operation on a SQL database requires "acquiring a lock" on a "set", "row", or "cell"; depending on the configuration of the server, it's hard to say what might happen in your case.
So the simple answer is:
Yes, SQL Server can make things (like inserts) happen at the same time, but with some caveats.
And the long answer is that it requires in-depth knowledge of locking and of your database and server configuration.
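If you want to see the row-level case for yourself, here is a sketch using two SSMS windows (the table is hypothetical):

-- window 1: lock a row and keep the transaction open
BEGIN TRANSACTION;
UPDATE dbo.Accounts SET Balance = Balance + 10 WHERE AccountId = 1;
-- (no COMMIT yet)

-- window 2: inserting a different row usually succeeds immediately,
-- but updating the same row waits until window 1 commits or rolls back
INSERT INTO dbo.Accounts (AccountId, Balance) VALUES (2, 0);
UPDATE dbo.Accounts SET Balance = 0 WHERE AccountId = 1;   -- blocks

-- window 1: release the locks
COMMIT TRANSACTION;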
We are experiencing some very annoying deadlock situations in a production SQL Server 2000 database.
The main setup is the following:
SQL Server 2000 Enterprise Edition.
The server is coded in C++ using ATL OLE DB.
All database objects are accessed through stored procedures.
All UPDATE/INSERT stored procedures wrap their internal operations in a BEGIN TRANSACTION ... COMMIT TRANSACTION block.
I collected some initial traces with SQL Profiler, following several articles on the Internet like this one (ignore that it refers to SQL Server 2005 tools; the same principles apply). From the traces it appears to be a deadlock between two UPDATE queries.
We have taken some measures that may have reduced the likelihood of the problem happening, such as:
SELECT WITH (NOLOCK). We have changed all the SELECT queries in the stored procedures to use WITH (NOLOCK). We understand the implications of having dirty reads but the data being queried is not that important since we do a lot of automatic refreshes and under normal conditions the UI will have the right values.
READ UNCOMMITTED. We have changed the transaction isolation level in the server code to READ UNCOMMITTED.
Reduced transaction scope. We have reduced the time a transaction is held open in order to minimize the probability of a database deadlock taking place.
We are also questioning the fact that we have a transaction inside the majority of the stored procedures (the BEGIN TRANSACTION ... COMMIT TRANSACTION block). In this situation my guess is that the transaction isolation level is SERIALIZABLE, right? And if we also have a transaction isolation level specified in the source code that calls the stored procedure, which one applies?
This is a processing intensive application and we are hitting the database a lot for reads (bigger percentage) and some writes.
If this were a SQL Server 2005 database I could go with Geoff Dalgas's answer on a deadlock issue concerning Stack Overflow, if that is even applicable to the issue I am running into. But upgrading to SQL Server 2005 is not, at the present time, a viable option.
As these initial attempts have failed, my question is: how would you go on from here? What steps would you take to reduce or even avoid the deadlocks, or what commands/tools should I use to better expose the problem?
A few comments:
The isolation level explicitly specified in your stored procedure overrides the isolation level of the caller.
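A sketch that illustrates this (procedure and table names are hypothetical; the level set inside the procedure also reverts when it returns):

CREATE PROCEDURE dbo.usp_DoWork
AS
BEGIN
    SET TRANSACTION ISOLATION LEVEL READ COMMITTED;  -- overrides the caller for the proc's duration
    SELECT COUNT(*) FROM dbo.Orders;
END
GO

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
EXEC dbo.usp_DoWork;   -- the SELECT inside runs under READ COMMITTED
-- back in the caller, the session is SERIALIZABLE again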
If sp_getapplock is available on 2000, I'd use it:
http://sqlblogcasts.com/blogs/tonyrogerson/archive/2006/06/30/855.aspx
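A minimal sketch of serializing a critical section with it (the resource name and timeout are hypothetical):

BEGIN TRANSACTION;

DECLARE @rc int;
EXEC @rc = sp_getapplock
        @Resource    = 'MyCriticalSection',
        @LockMode    = 'Exclusive',
        @LockOwner   = 'Transaction',
        @LockTimeout = 10000;        -- milliseconds

IF @rc >= 0
BEGIN
    -- do the work that was deadlocking here; the app lock is released
    -- automatically when the transaction commits or rolls back
    COMMIT TRANSACTION;
END
ELSE
    ROLLBACK TRANSACTION;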
In many cases the SERIALIZABLE isolation level increases the chance that you get a deadlock.
A good resource for 2000:
http://www.code-magazine.com/article.aspx?quickid=0309101&page=1
Also some of Bart Duncan's advice might be applicable:
http://blogs.msdn.com/bartd/archive/2006/09/09/747119.aspx
In addition to Alex's answer:
- Eyeball the code to see if tables are being accessed in the same order. We did this recently and reordered code to always go parent then child. The system had grown, code and features were more complex, there were more users: we simply started getting deadlocks.
- See if transactions can be shortened (eg start later, finish earlier, less processing)
- Identify which code you'd like not to fail and use SET DEADLOCK_PRIORITY LOW in the other code.
We've used this (SQL 2005 has more options here) to make sure that some code will never be chosen as the deadlock victim, at the expense of other code.
- If you have a SELECT at the start of the transaction to prepare some data, consider HOLDLOCK (maybe UPDLOCK) to keep those rows locked for the duration. We use this occasionally to stop writes on the table by other processes.
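A sketch of that pattern (table and column names are hypothetical):

BEGIN TRANSACTION;

-- take and hold an update lock on the rows we are about to change
SELECT  OrderId, Status
FROM    dbo.Orders WITH (UPDLOCK, HOLDLOCK)
WHERE   OrderId = 42;

-- ... other processing ...

UPDATE  dbo.Orders
SET     Status = 1
WHERE   OrderId = 42;

COMMIT TRANSACTION;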
The reason for the deadlocks in my scenario turned out, after all, to be the indexes. We were using (generated by default) non-clustered indexes for the primary keys of the tables. Changing to clustered indexes fixed the problem.
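For reference, the change is essentially dropping and recreating the primary key as clustered (table, constraint, and column names are hypothetical):

ALTER TABLE dbo.Orders DROP CONSTRAINT PK_Orders;
ALTER TABLE dbo.Orders ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderId);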
My guess would be that you are experiencing deadlocks, either:
Because your DML statements (probably UPDATEs) are escalating to table locks, or
Because different stored procedures are accessing the same tables in transactions, but in a different order.
To address this, I would first examine the stored procedures and make sure that the modification statements have the indexes they need.
Note: this applies to both the target tables and the source tables (despite NOLOCK, an UPDATE's source tables will get locks too). Check the query plans for scans in user stored procedures. Unlike batch or bulk operations, most user queries and DML statements work on a small subset of the table rows and so should not be locking the entire table.
Secondly, I would check the stored procedures to ensure that all data access in a stored procedure is done in a consistent order (parent -> child is usually preferred).