Exclusive lock on table for Update - sql-server

It's more of a theoretical question, but I need to do something about it.
I have a web interface with SQL Server 2012 behind it, which is giving me a lot of problems on UPDATE.
I have one table, let's call it Contract, which has 100+ columns.
When a user does an UPDATE from the web interface, SQL Server exclusively locks the whole table instead of only the updated row, so other users can't do inserts, updates, or sometimes even selects, which sometimes causes multiple deadlocks.
Usually update looks like
UPDATE Contract
set
param1=#1,
param2=#2,
param3=#3,
param4=#4,
.....
where id=#id
How can I fix this, or how can I tell SQL Server to lock only the affected row on updates?
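A common cause of this is a missing index on the `id` column: without one, the UPDATE has to scan the table and may take (or escalate to) a table-level lock. A minimal sketch of the usual remedies, assuming the table and column are really named `Contract` and `id` (the index name and `@p1`/`@id` parameters are illustrative):

```sql
-- Without an index on id, the UPDATE must scan the whole table and can
-- escalate to a table-level exclusive lock.
CREATE UNIQUE INDEX IX_Contract_Id ON Contract (id);

-- As a diagnostic (not a permanent fix), a ROWLOCK hint asks the engine
-- to start with row locks; escalation can still occur under pressure.
UPDATE Contract WITH (ROWLOCK)
SET param1 = @p1
WHERE id = @id;

-- Lock escalation can also be disabled per table (SQL Server 2008+):
ALTER TABLE Contract SET (LOCK_ESCALATION = DISABLE);
```

The index is usually the real fix; the hint and the `LOCK_ESCALATION` option only paper over the scan.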

Related

Disable transactions on SQL Server

I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server) in one single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any INSERT/UPDATE/DELETE command on the application database, a transaction is started, and therefore the table in the log database is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application will be blocked.
So... is there any way to disable transactions on a particular database or on a particular table?
You cannot turn off the log. Everything gets logged. You can set the recovery model to "Simple", which limits the amount of log data kept after the records are committed.
"the table of the log database is locked": why is that?
Normally you log changes by inserting records, and the insert of records should not lock the complete table; normally there should not be any contention on insertion.
If you do more than inserts, perhaps you should reconsider that. Perhaps you should also look at the indexes defined on the log table; perhaps you can avoid some of them.
It sounds from the question that you have a BEGIN TRANSACTION at the start of your triggers, and that you are logging to the other database before the COMMIT TRANSACTION.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert it into your log table.
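The pattern above can be sketched as follows. This is a hedged illustration, not your actual schema: the business table `SomeTable`, the parameters, and the log columns `LogText`/`LoggedAt` are assumptions; only `tbSysMasterLog` comes from the question.

```sql
-- Capture the values to be logged, commit the business transaction
-- first, then write the log row outside the transaction.
DECLARE @logText nvarchar(max);

BEGIN TRANSACTION;
    UPDATE dbo.SomeTable SET Col1 = @newValue WHERE Id = @id;
    SET @logText = 'SomeTable updated, Id=' + CAST(@id AS nvarchar(20));
COMMIT TRANSACTION;

-- The log insert now takes its lock outside the main transaction,
-- so other writers are not blocked while the transaction is open.
INSERT INTO LogDb.dbo.tbSysMasterLog (LogText, LoggedAt)
VALUES (@logText, SYSUTCDATETIME());
```

The trade-off, as noted below, is that a rolled-back transaction will leave no log entry, and log order may not match commit order.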
Normally inserts are fast and can happen in parallel without locking. Certain things, like identity columns, require ordering, but identity is a very lightweight structure, and it can be avoided by generating GUIDs so that inserts are non-blocking. For something like your log table, however, a primary key identity column gives you a clear sequence, which is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log entries may not be in the same order as the transactions occurred, due to the differing times transactions take to commit.
We normally log into individual tables with a name similar to the master table, e.g. FooHistory or AuditFoo.
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning, and it gives you a copy of every statement run on the database (including from triggers), which you can log to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive when you trace, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script the starting, stopping, and loading of traces.
The load on the server that receives the trace log is minimal, and I have never had a locking problem on a server receiving a trace, so I am pretty sure you are doing something to cause the locks.

Create audit table for a big table with a lot of columns in SQL Server

I know this question has been asked many times. My question here is that I have a table with around 8,000 records but around 25 columns, and I would like to monitor any changes we make to it. My server is only SQL Server 2008.
We usually create an audit table for the specific table we monitor and record changes into it using cursors, since we usually have a lot of columns to monitor. But I don't want to do that this time!
Do you think that instead of cursors I can use a trigger that writes to an audit table, say AuditXYZ, and records changes in it with columns like field name, old value, new value, update_date, and username?
Many thanks!
Short answer
Yes, absolutely use triggers over cursors. Cursors have a bad reputation for being misused and performing terribly, so avoid them where possible.
Longer answer
If you have control over the application which is reading/writing to this table, consider having it build the queries for auditing instead. The thing to watch out for with an INSERT/UPDATE/DELETE trigger (which I assume is what you're going for) is that it will increase the write time for queries on that table, whereas writing the audit in its own query avoids this (with a caveat that I'll detail in the next paragraph). You also need to consider how much metadata the audit table needs to contain. For example, if your application requires users to log in, you may want to record their username in the audit table, which may not be available to a trigger. It all comes down to the purpose the audit table needs to serve for your application.
An advantage that triggers do have in this scenario is that they are bound to the same transaction as the underlying query. So if your INSERT/UPDATE/DELETE query fails and is rolled back, the audit rows created by the trigger will be rolled back along with it, so you'll never end up with an audit entry for rows which never existed. If you favour writing your own audit queries over a trigger, you'll need to be careful to ensure that they run in the same transaction and are rolled back correctly in the event of an error.
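A minimal sketch of the trigger approach described above, assuming a source table `dbo.XYZ` with an `Id` key and one monitored column `SomeColumn` (all of these names are illustrative, as is the audit table layout):

```sql
-- Audit table with the columns suggested in the question.
CREATE TABLE dbo.AuditXYZ (
    AuditId    int IDENTITY PRIMARY KEY,
    FieldName  sysname,
    OldValue   nvarchar(max),
    NewValue   nvarchar(max),
    UpdateDate datetime2 DEFAULT SYSUTCDATETIME(),
    UserName   sysname   DEFAULT SUSER_SNAME()
);
GO
CREATE TRIGGER trg_XYZ_Audit ON dbo.XYZ
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- inserted/deleted are the pseudo-tables SQL Server exposes inside
    -- the trigger; repeat a block like this for each monitored column.
    INSERT INTO dbo.AuditXYZ (FieldName, OldValue, NewValue)
    SELECT 'SomeColumn', d.SomeColumn, i.SomeColumn
    FROM inserted i
    JOIN deleted d ON d.Id = i.Id
    WHERE ISNULL(d.SomeColumn, '') <> ISNULL(i.SomeColumn, '');
END;
```

With 25 columns this means 25 similar blocks, which is verbose but set-based and avoids cursors entirely; because the trigger runs in the caller's transaction, a rollback removes the audit rows too.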

Database Engine Update Logic

When a record is updated in a SQL Server table, how does the database engine physically execute such a request: is it an INSERT + DELETE, or a true UPDATE operation?
As we know, the performance of a database and any statements depends on many variables. But I would like to know if some things can be generalized.
Is there a threshold (table size, query length, # records affected...) after which the database switches to one approach or the other upon UPDATEs?
If there are times when SQL Server physically performs an insert/delete when a logical update is requested, is there a system view or metric that would show this? That is, if there is a running total of all the inserts, updates, and deletes that the database engine has performed since it was started, then I would be able to figure out how the database behaves after I issue a single UPDATE.
Is there any difference in the UPDATE statement's behavior depending on the SQL Server version (2008, 2012, ...)?
Many thanks.
Peter
An UPDATE on a base table without triggers is always a physical UPDATE. SQL Server has no such threshold. You can look up usage statistics, for example, in sys.dm_db_index_usage_stats.
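The DMV mentioned above can be queried directly. A hedged sketch (the counters are cumulative since the last service restart, and `user_updates` counts all write operations, i.e. INSERT, UPDATE, and DELETE, against each index):

```sql
-- Per-table write and read counters for the current database.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       s.user_updates,   -- all writes (insert/update/delete)
       s.user_seeks, s.user_scans, s.user_lookups
FROM sys.dm_db_index_usage_stats AS s
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC;
```

Because `user_updates` is aggregated, it will not distinguish a physical update from an insert/delete pair; it is mainly useful for seeing which tables are write-heavy.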
UPDATE edits the existing row in place. If it were implemented as insert/delete, you would get update failures for duplicate keys.
INSERT/UPDATE/DELETE can also each be permissioned discretely, so a user could be allowed to update records but not insert or delete them, which also shows that an update is not implemented that way.

Can a table be locked in Oracle?

I'm developing a web app and must send valid error messages to clients if something goes wrong. If a table is locked, I must send an error about it. Can a table be locked in an Oracle database? If it can't, I just won't implement this functionality.
Yes, a table can be locked in Oracle. If a process writes (updates or inserts) to a table and then neither commits nor closes the connection, the affected rows stay locked.
You can replicate it by running an update in a tool with autocommit off and not issuing a commit or rollback; you will then be blocked when you try to write the same rows from a different tool or from code.
If you have TOAD, simply edit a row of the table and don't save, and simultaneously try to update that row from your code.
However, your application has almost zero chance of locking the table itself, as the connection will be closed after a connection timeout. But there is a chance that some other process will lock the table.
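The two-session experiment described above can be sketched as follows. This assumes the standard Oracle HR sample schema (`employees`, `employee_id`); any table of your own works the same way.

```sql
-- Session 1: lock a row and hold the transaction open.
UPDATE employees SET salary = salary WHERE employee_id = 100;
-- (no COMMIT / ROLLBACK issued)

-- Session 2: this statement now blocks until session 1 commits
-- or rolls back.
UPDATE employees SET salary = salary WHERE employee_id = 100;

-- To fail immediately instead of waiting (useful for sending the
-- error message to the client), lock explicitly with NOWAIT:
SELECT salary FROM employees WHERE employee_id = 100
FOR UPDATE NOWAIT;
-- raises ORA-00054 "resource busy" if the row is already locked
```

Catching ORA-00054 in the application is one way to produce the "table is locked" error message the question asks about.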

How to avoid Table Locks while Inserting Data in table

In our application we insert records into 15 tables in a single transaction from C# code.
For this we create one INSERT statement for each table, append them all into one query, and use ExecuteNonQuery to insert the records. Because we want the insert to happen in all tables and don't want any inconsistent data, we run it in a transaction.
This functionality is written in a service, and more than one service (same service, different installations) performs this task (inserting data into the tables) concurrently.
These services insert totally different rows into the tables and are not dependent on each other in any way.
But when we run these services, we get deadlocks on these insert statements.
The code is like this:
Open DB Connection
Begin Transaction
Insert data in tables
Commit Transaction.
All services perform these steps on different sets of data going into the same 15 tables.
A SQL Profiler trace suggests there are exclusive locks on these tables during the inserts.
Can you please suggest why there would be table-level locks while just firing insert statements, ending in deadlocks? And what is the best possible way to prevent it?
You do not get deadlocks just from locking tables, even with exclusive locks. If the tables are locked, the new inserts will just wait for the existing inserts to finish (assuming you are not using a no-wait hint).
Deadlocks happen when you lock resources in such a way that the SQL statements cannot continue. Look for unbounded sub-selects or WHERE clauses that are not specific enough in your insert statements.
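To confirm what the Profiler trace suggests, you can inspect the locks directly while the inserts are running. A hedged diagnostic sketch using `sys.dm_tran_locks` (run it in a separate session against the same database):

```sql
-- List table-level (OBJECT) locks currently held or requested in this
-- database, to see whether the inserts really take table-level X locks
-- or just the usual intent (IX) locks.
SELECT l.request_session_id,
       l.resource_type,      -- OBJECT = table, PAGE, KEY = row
       l.request_mode,       -- X, IX, S, ...
       l.request_status,     -- GRANT or WAIT
       OBJECT_NAME(l.resource_associated_entity_id) AS object_name
FROM sys.dm_tran_locks AS l
WHERE l.resource_type = 'OBJECT'
  AND l.resource_database_id = DB_ID();
```

Plain inserts normally show only IX locks at the table level; if you see full X locks on OBJECT resources, something (escalation, hints, or a serializable isolation level) is promoting them.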
Post your sql so we can see what you are doing.
