Can a table be locked in Oracle?

I'm developing a web app and must send valid error messages to clients if something goes wrong. If a table is locked, I must send an error about it. Can a table be locked in an Oracle database? If it can't, I just won't implement this functionality.

Yes, a table can be locked in Oracle. If two processes try to write to (update or insert) the same rows and the first neither commits nor closes its connection, the second will be blocked.
You can reproduce this by running an update query in a tool with autocommit off and not issuing a commit or rollback; when you then try to write to the same rows from a different tool or from code, that session will block.
If you have TOAD, simply edit a row of the table and don't save. Simultaneously, try to update that row from your code.
However, your application has almost zero chance of locking the table itself, since your connection will be closed once the connection timeout expires. But there is a chance that some other process will lock the table.
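To surface this to the client as an error rather than a hang, the application can probe the lock with FOR UPDATE NOWAIT, which fails immediately with ORA-00054 if the row is already locked. A minimal sketch, assuming a hypothetical employees table:

-- Session 1 (e.g. in TOAD, autocommit off): take a row lock and hold it.
UPDATE employees SET salary = salary WHERE employee_id = 100;
-- (no COMMIT or ROLLBACK issued)

-- Session 2 (the application): this fails immediately with
-- ORA-00054 "resource busy and acquire with NOWAIT specified"
-- instead of blocking, so the app can return a clean error message.
SELECT * FROM employees WHERE employee_id = 100 FOR UPDATE NOWAIT;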

Related

When using a SQL Transaction, how long can I keep it open without causing a problem?

I have a Web App hosted on Azure. It has a complicated signup form allowing users to rent lockers, add spouse memberships, etc. I don't want to add records to the database until EVERYTHING on that page is completed and checks out. I am using a SQL Transaction so that I can add records to various tables and then roll them back if the user does not complete the entries properly, or simply exits the page. I don't want a bunch of orphaned records in my DB. All of the records that will eventually be added reference each other by the identity field on each table. So, if I don't add records to a table, I don't get an identity returned to reference in other tables.
At the start of the page, I open a SQL connection and associate it with a Transaction and I hold the transaction open until the end of the process. If all is well, I commit the transaction, send out emails etc.
I know best practice is to open and close a SQL connection as quickly as possible. I don't know of any other way to operate this page without opening a SQL connection and transaction and holding it open until the end of the process.
If I should not be doing it this way, how do others do it?
I see two questions here: one about how I would do it, and the other about the limits of the DB. Starting with the second: the timeout of a transaction depends on your connection string's timeout, so as long as the connection is still alive, you can complete the commit or perform the rollback.
As for how to do it: I would not do it that way. Tying a critical database lock to user interaction is a really bad approach. You put performance in your users' hands, and you're also assuming well-intentioned clients, but you'll have bad actors too.
I'd store the information locally in the web browser and, once the process is complete, send it to the DB to commit. The final "POST" would then create all the items in one short transaction, which is also going to take some time.
Another option, if you want to keep it server-side, is a Redis server to cache the information and then "move it" into the DB when the process is finished.
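A minimal sketch of that final "POST", assuming hypothetical table names and parameters bound by the application: all inserts happen in one short transaction, and identity values are captured with SCOPE_IDENTITY() to build the cross-table references.

BEGIN TRANSACTION;

-- @MemberName etc. are parameters supplied by the page.
INSERT INTO Members (Name) VALUES (@MemberName);
DECLARE @MemberId INT = SCOPE_IDENTITY();  -- identity to reference in child tables

INSERT INTO Lockers (MemberId, LockerNumber) VALUES (@MemberId, @LockerNumber);
INSERT INTO SpouseMemberships (MemberId, SpouseName) VALUES (@MemberId, @SpouseName);

COMMIT TRANSACTION;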

How is an update query on a locked database resolved in PostgreSQL?

Using PostgreSQL, if a user wants to add new data or update existing data in the database while it is locked, how is their transaction resolved? Let's consider this scenario, and please correct me if my understanding is wrong at any point:
1. User 1 wants to batch update some records in the database.
2. The transaction from user 1 locks the database until all the updates are pushed.
3. User 2 wants to update something in the database, or insert some new data to the database while it is locked.
4. Based on what MVCC denotes, user 2 is shown the pre-lock version of the database.
5. User 2 inserts or updates the data.
6. User 1 finishes pushing its transaction and releases the database.
7. There are now two versions of the database, and the data has to be resolved.
How does the issue in step 7 get resolved? I read somewhere that it will take the data with the latest global timestamp. But how can I be sure that is the data it should keep? If the data from user 2 has priority over user 1, but the transaction from user 2 finished before user 1, how would this priority be resolved? Thank you for your help.
You cannot lock the database in PostgreSQL, but you can lock a table exclusively. This is something you should never do, because it is not necessary and hurts concurrency and system maintenance processes (autovacuum).
Every row you modify or delete will be automatically locked for the duration of the transaction. This does not affect the modification of other rows by concurrent sessions.
If a transaction tries to modify a row that is already locked by another transaction, the second transaction is blocked and has to wait until the first transaction is done. This way the scenario you describe is avoided.
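A minimal sketch of that row-level behavior, assuming a hypothetical accounts table:

-- Session 1: begin a transaction and update a row; that row is now locked.
BEGIN;
UPDATE accounts SET balance = balance - 100 WHERE id = 1;

-- Session 2, concurrently:
UPDATE accounts SET balance = balance + 50 WHERE id = 2;  -- different row: proceeds at once
UPDATE accounts SET balance = balance + 50 WHERE id = 1;  -- same row: blocks until
                                                          -- session 1 commits or rolls back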

Disable transactions on SQL Server

I need some light here. I am working with SQL Server 2008.
I have a database for my application. Each table has a trigger that stores all changes in another database (on the same server), in one single table, 'tbSysMasterLog'. Yes, the application's log is stored in another database.
The problem is that before any insert/update/delete command on the application database, a transaction is started, and therefore the log table in the other database is locked until the transaction is committed or rolled back. So anyone else who tries to write to any other table of the application is blocked.
So... is there any possible way to disable transactions on a particular database or on a particular table?
You cannot turn off the transaction log; everything gets logged. You can set the recovery model to "Simple", which limits how much log data is kept after the records are committed.
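For reference, switching the recovery model is a one-line change (the database name here is hypothetical):

ALTER DATABASE MyAppDb SET RECOVERY SIMPLE;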
" the table of the log database is locked": why that?
Normally you log changes by inserting records. The insert of records should not lock the complete table, normally there should not be any contention in insertion.
If you do more than inserts, perhaps you should consider changing that. Perhaps you should look at the indices defined on log, perhaps you can avoid some of them.
It sounds from the question like you have a BEGIN TRANSACTION at the start of your triggers, and that you are logging to the other database prior to the COMMIT TRANSACTION.
Normally you do not need explicit transactions in SQL Server.
If you do need explicit transactions, you could put the data to be logged into variables, commit the transaction, and then insert the data into your log table, as in the sketch below.
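A minimal sketch of that pattern, with hypothetical table and column names:

DECLARE @OrderId INT = 42;  -- stands in for the application's parameter
DECLARE @LogPayload NVARCHAR(MAX);

BEGIN TRANSACTION;
UPDATE dbo.Orders SET Status = 'Shipped' WHERE OrderId = @OrderId;
-- Capture what should be logged while the data is at hand.
SET @LogPayload = 'Order ' + CAST(@OrderId AS NVARCHAR(20)) + ' set to Shipped';
COMMIT TRANSACTION;

-- The log insert happens outside the transaction, so it never holds
-- the log table locked while the main work is still in flight.
INSERT INTO LogDb.dbo.tbSysMasterLog (LoggedAt, LogText)
VALUES (SYSDATETIME(), @LogPayload);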
Normally inserts are fast and can happen in parallel without blocking. Certain things like identity columns do require ordering, but identity is a very lightweight mechanism, and it can be avoided entirely by generating GUIDs so that inserts are non-blocking. For something like your log table, though, a primary-key identity column gives you a clear sequence, which is probably helpful in working out the order of events.
Obviously, if you log after the transaction, the log entries may not be in the same order as the transactions occurred, due to the different times transactions take to commit.
We normally log into individual tables with names similar to the master table, e.g. FooHistory or AuditFoo.
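A minimal sketch of such a per-table audit trigger (all names hypothetical), which only ever inserts and so causes little contention:

CREATE TRIGGER trFooAudit ON dbo.Foo
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Append-only: one history row per affected row, no shared log table.
    INSERT INTO dbo.FooHistory (FooId, ChangedAt)
    SELECT Id, SYSDATETIME()
    FROM inserted;
END;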
There are other options. A very lightweight method is to use a trace; this is what is used for performance tuning and will give you a copy of every statement run on the database (including those issued by triggers), and you can log this to a different database server. It is a good idea to log to a different server if you are tracing a heavily used server, since the volume of data is massive if you are tracing, say, 1,000 simultaneous sessions.
https://learn.microsoft.com/en-us/sql/tools/sql-server-profiler/save-trace-results-to-a-table-sql-server-profiler?view=sql-server-ver15
You can also trace to a file and then load it into a table (better performance), and script up starting, stopping, and loading the traces.
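Loading a trace file into a table for analysis can be done with sys.fn_trace_gettable; a minimal sketch, with a hypothetical file path and target table:

SELECT *
INTO dbo.TraceResults
FROM sys.fn_trace_gettable(N'C:\traces\mytrace.trc', DEFAULT);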
The load on the server that is getting the trace log is minimal and I have never had a locking problem on the server receiving the trace, so I am pretty sure that you are doing something to cause the locks.

Inserted row is not accessible for another connection after transaction commit

We have a very weird problem using EF 6 with MSSQL and MassTransit with RabbitMQ.
The scenario is as follows:
Client application inserts a row in database (via EF code first - implicit transaction only in DbContext SaveChanges)
Client application publishes Id of the row via MassTransit
Windows Service with consumers processes the message
The row is not found initially; after a few retries, it appears
I always thought that after commit the row is persisted and becomes accessible for other connections...
We have ALLOW_SNAPSHOT_ISOLATION on in the database.
What is the reason for this, and is there any way to be sure that the row is accessible before publishing the Id to MQ?
If you are dependent upon another transaction being completed before your event handler can continue, you need to make your read serializable; otherwise, transactions are isolated from each other and the results of the write transaction are not yet available. Your write may also need to be serializable, depending upon how the query is structured.
Yes, the consumers run that quickly.
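A minimal sketch of a serializable read on the consumer side (the table name is hypothetical):

-- Force this read to honor concurrent writers' locks instead of
-- silently reading an older row version.
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
-- @Id stands in for the identifier received from the MassTransit message.
SELECT * FROM dbo.Messages WHERE Id = @Id;
COMMIT TRANSACTION;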

Exclusive lock on table for Update

It's more of a theoretical question, but I need to do something about it.
I have a web interface with SQL Server 2012 behind it, which is giving me a lot of problems on UPDATE.
I have one table, let's call it Contract, which has 100+ columns.
When a user from the web interface does an UPDATE, it exclusively locks the whole table instead of only the updated row, so other users can't do inserts or updates, and sometimes not even selects, which sometimes causes multiple deadlocks.
Usually the update looks like:
UPDATE Contract
SET
    param1 = #1,
    param2 = #2,
    param3 = #3,
    param4 = #4,
    .....
WHERE id = #id
How can I fix this locking, or how can I tell SQL Server to lock only the row on updates?
