Best way to lock entities between clients - database

We are rewriting an old COBOL application in Java EE.
The old COBOL system is a full client application.
The client's requirement is to lock entities (e.g. a particular account) so that no one else can access them, or can access them only read-only. This is because some transactions can be long, and we don't want users to enter a lot of data only to lose everything during an update.
Optimistic locking is not wanted.
Currently the requirement is implemented by creating lock files in a file system, which brings a lot of problems: concurrent access, no transactions. Not very Java EE compliant.
The lock should also identify the client that is holding it.
Any suggestions?

I would suggest using the database itself for locking instead of the file system approach.
In one application we implemented locking on a per-entity basis using a special table LOCK with the fields ENTITY_TYPE, ENTITY_ID, USER_ID and VALID_TO. Locks timed out after a certain period of user inactivity. This prevents entities from staying locked forever when a client closes the application, loses its network connection, etc.
Before allowing a user to edit an entity, we checked for / created a row in this table. In the UI the user had a button to lock an entity, or an info box showing the user holding the lock if the entity was already locked.
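A minimal sketch of that approach using plain JDBC. The table is named ENTITY_LOCK here because LOCK is a reserved word in most SQL dialects; the primary key on (ENTITY_TYPE, ENTITY_ID) makes the INSERT itself an atomic test-and-set, so no SELECT-then-INSERT race is possible:

```java
// Sketch only. Assumes a table created roughly as:
//   CREATE TABLE ENTITY_LOCK (
//     ENTITY_TYPE VARCHAR(64) NOT NULL,
//     ENTITY_ID   BIGINT      NOT NULL,
//     USER_ID     VARCHAR(64) NOT NULL,
//     VALID_TO    TIMESTAMP   NOT NULL,
//     PRIMARY KEY (ENTITY_TYPE, ENTITY_ID)
//   );
import java.sql.*;
import java.time.Instant;

public class EntityLockDao {
    private final Connection con;

    public EntityLockDao(Connection con) { this.con = con; }

    /** Tries to acquire a lock; returns true on success, false if another user holds it. */
    public boolean tryLock(String entityType, long entityId, String userId, int ttlSeconds)
            throws SQLException {
        // First clear an expired lock, so abandoned locks (crashed client,
        // network error) do not block the entity forever.
        try (PreparedStatement del = con.prepareStatement(
                "DELETE FROM ENTITY_LOCK WHERE ENTITY_TYPE = ? AND ENTITY_ID = ? AND VALID_TO < ?")) {
            del.setString(1, entityType);
            del.setLong(2, entityId);
            del.setTimestamp(3, Timestamp.from(Instant.now()));
            del.executeUpdate();
        }
        try (PreparedStatement ins = con.prepareStatement(
                "INSERT INTO ENTITY_LOCK (ENTITY_TYPE, ENTITY_ID, USER_ID, VALID_TO) VALUES (?, ?, ?, ?)")) {
            ins.setString(1, entityType);
            ins.setLong(2, entityId);
            ins.setString(3, userId);
            ins.setTimestamp(4, Timestamp.from(Instant.now().plusSeconds(ttlSeconds)));
            ins.executeUpdate();
            return true;
        } catch (SQLIntegrityConstraintViolationException e) {
            return false; // key collision: somebody else already holds the lock
        }
    }
}
```

When tryLock returns false, the application can read the USER_ID of the existing row to fill the "locked by ..." info box.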

Related

How does Laravel handle locking and concurrent updates?

Grails, by default, uses optimistic locking. It maintains an update count, and it checks this and throws an exception (and rolls the second one back) if two people try to update the same record at the same time.
What is Laravel's strategy for concurrent updates?
If the answer is nothing (i.e. the second write simply overwrites the first), this would result in a broken application. E.g. if you have an API which happens to update a user's "last logged in" value, and you have a backend admin application which allows an administrator to, say, ban a user, then we could have the situation where the ban update is overwritten (and lost) by the API update. In this case we need to use pessimistic locking, which is not well understood by many developers and can easily result in deadlocks or slowdowns. Or we can separate the data into a lot of small tables, but this also has its issues.
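As an aside, the update-count check described above boils down to a guarded UPDATE. A minimal sketch in JDBC, assuming a hypothetical users table with a version column (this illustrates the technique, not Grails' or Laravel's actual code):

```java
import java.sql.*;

public class UserDao {
    private final Connection con;

    public UserDao(Connection con) { this.con = con; }

    /** Writes the row only if nobody has changed it since we read expectedVersion. */
    public void banUser(long userId, int expectedVersion) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE users SET banned = TRUE, version = version + 1 " +
                "WHERE id = ? AND version = ?")) {
            ps.setLong(1, userId);
            ps.setInt(2, expectedVersion);
            if (ps.executeUpdate() == 0) {
                // Zero rows matched: a concurrent writer bumped the version first.
                throw new IllegalStateException("Concurrent modification of user " + userId);
            }
        }
    }
}
```

Here the second writer fails instead of silently overwriting the first, so the lost-update scenario above becomes a detectable error rather than data loss.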

Is keeping a lock on a record for a long period of time common practice with modern database systems?

My understanding is that locking records in a database (optimistic or pessimistic) is usually done for a very short period of time, during a transaction.
The software I'm working with right now keeps locks on records for long periods of time:
A lock is kept on the record of the logged-in user (in the ACTIVE_USERS table) for the whole time the user is logged into the software.
Say USER A is working on a file. The record corresponding to the file is locked until USER A saves or exits the file. So if a colleague, USER B, tries to work on the same file, a popup shows up saying 'You can't work on this file because USER A is working on it right now'.
The company I'm working for wants the changes needed to implement compatibility with Microsoft SQL Server to be minimal, so I need to implement such a locking mechanism. I've hacked together something that works in a minimal test project, but I'm not sure it is up to industry and MSSQL standards...
This is a bit long for a comment.
Using the database locking mechanism for this application-level locking seems unusual. Database locks could be on the row, page, or table level, and they also affect indexes, so there could be unexpected side effects. Obviously, a proliferation of locks also makes deadlocks much more likely.
Normally, application locks would be handled at the record level. Using flags (of some sort) in the record, the application would ensure that only one user has access to the file at a time.
I would say, it might work. But I would never design a system that way and I'd be wary of unexpected consequences.
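For what it's worth, the flag-in-the-record idea can be sketched with two hypothetical columns, locked_by and locked_at; a conditional UPDATE makes the claim atomic:

```java
import java.sql.*;

public class FileClaimDao {
    private final Connection con;

    public FileClaimDao(Connection con) { this.con = con; }

    /** Atomically claims a file for editing via a lock flag stored in the row itself. */
    public boolean claimFile(long fileId, String userId) throws SQLException {
        try (PreparedStatement ps = con.prepareStatement(
                "UPDATE files SET locked_by = ?, locked_at = CURRENT_TIMESTAMP " +
                "WHERE id = ? AND locked_by IS NULL")) {
            ps.setString(1, userId);
            ps.setLong(2, fileId);
            // One row updated: the claim succeeded. Zero rows: USER A got there first.
            return ps.executeUpdate() == 1;
        }
    }
}
```

Releasing the claim is the mirror-image UPDATE setting locked_by back to NULL, and locked_at lets a cleanup job expire claims abandoned by crashed clients.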

Database- or user-level locks

I'm currently facing the following problem:
I have a C# .NET application connecting to a database (with the use of NHibernate). The application basically displays the database content and lets the user edit it. Since multiple instances of the application run at the same time (on the same and on different workstations), I'm having concurrency problems as soon as two users modify the same record at the same time.
Currently I have kind of solved the issue with optimistic locking. But this is not a perfect solution, since one user still loses their changes.
Now I came up with the idea of having the application lock an entry every time it loads one from the database, and release the lock as soon as the user switches to another entry. So basically all entries currently displayed to the user are locked in the database. If another user loads locked entries, they are displayed in read-only mode.
Now to my actual question:
Is it a good idea to do the locking at the database level? That means I would open a new transaction every time a user loads a new entry, and lock it. Or would it be better to do it through a "lock table" which holds, for example, a key for every locked entry?
Thanks for your help!
Is it a good idea to do the locking at the database level?
Yes, it is fine in some cases.
So basically all entries currently displayed to the user are locked in the database.
...
Or would it be better to do it through a "lock table" which holds, for example, a key for every locked entry?
So you lock a bunch of entries on page load? And when would you release them? What if editing takes a long time (e.g. the user started editing an entry and then went for lunch)? What if the user closes the page without editing all of those locked entries, how long would they remain locked?
Pessimistic locking and a "lock table" help to avoid some problems of optimistic locking, but they bring new ones.
Currently I have kind of solved the issue with optimistic locking. But this is not a perfect solution, since one user still loses their changes.
I can't agree that this is losing changes: in your case, if the validate and commit phases are performed as a single atomic operation, the entry won't be corrupted and only one transaction will succeed (let's suppose it is the 1st); the other (the 2nd) will be rolled back.
According to NHibernate's Optimistic concurrency control documentation:
It will be atomic if only one of these database transactions (the last one) stores the updated data, all others simply read data.
The only approach that is consistent with high concurrency and high scalability is optimistic concurrency control with versioning.
NHibernate provides for three possible approaches to writing application code that uses optimistic concurrency.
So the 2nd transaction would be gracefully rolled back, and afterwards the user could be notified that they either have to redo the edit (in a new transaction) or skip this entry.
But everything depends on your business logic and requirements. If you don't have high contention for the data, and thus there won't be many collisions, then I suggest using optimistic locking.
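For reference, in ORM terms the versioning approach is just an annotated field. This is a Java/Hibernate sketch of the same idea (NHibernate's version mapping is the C# counterpart; the entity here is hypothetical):

```java
import javax.persistence.*;

@Entity
public class Entry {
    @Id
    @GeneratedValue
    private Long id;

    private String content;

    // The ORM increments this on every update and appends "AND version = ?"
    // to the UPDATE statement. A mismatch means a concurrent writer won,
    // and the flush fails with an OptimisticLockException: the graceful
    // rollback described above.
    @Version
    private int version;

    // getters/setters omitted
}
```

The losing transaction can catch that exception and offer the user exactly the two options mentioned: redo the edit in a new transaction, or skip the entry.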

How Does Odoo Handle Database Locking?

I know Odoo will not concurrently update a table row. However, my understanding is that two people can be looking at the same record in edit mode simultaneously. When both users save, the record is overwritten by whoever saved last (even if only 1 ms later). This means a user may be updating the record based on data which has in fact changed while they were editing that exact same record.
How can row level locking be enforced in Odoo?
How can you restrict a record from being opened in edit mode if someone else has the same record opened in edit mode?
Odoo doesn't have such functionality. You can write your own, but it will be complicated.
In general, you can create a table/model holding locks.
For example, when a user clicks the edit button you could create a lock in the Locks table, and when another user clicks edit on that document, it would read the table with locks. If there is a lock, an exception pops up.
When the user who created the lock saves their changes, or on timeout, the lock should be released, but the user waiting for their turn should only be able to save changes after reloading the page.
It's a simple concept, but in general it's not easy to do.
Odoo is not by itself meant for collaboration of many users on one shared document. There are addons working with Etherpad, as in the Notes app. You can use Etherpad for critical fields of models which must be shared among several users at once.

Hibernate and multiple threads, synchronize changes between multiple users

I am using Hibernate in an Eclipse RAP application. I have database tables mapped to classes with Hibernate, and these classes have properties that are fetched lazily (if they weren't fetched lazily, I would probably end up loading the whole database into memory on my first query). I do not synchronize database access, so there are multiple Hibernate sessions for the users, and I let the DBMS do the transaction isolation. This means different instances of fetched data will belong to different users. There are things that, if a user changes them, I would like to update across multiple users. Currently I am thinking about using Hibernate's session.refresh(object) in these cases to refresh the data, but I'm unsure how this will impact performance when refreshing multiple objects, or whether it's the right way to go.
I hope my problem is clear. Is my approach to the problem OK, is it fundamentally flawed, or am I missing something? Is there a general solution for this kind of problem?
I would appreciate any comments on this.
The general solution is
to have transactions as short as possible
to link the session lifecycle to the transaction lifecycle (this is the default: the session is closed when the transaction is committed or rolled back)
to use optimistic locking to avoid two transactions updating the same object at the same time.
If each transaction is very short, and transaction A updates some object from O to O', then a concurrent transaction B will only see O until it commits or rolls back, and any transaction started after A will see O', because a new session starts with each transaction.
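A minimal Hibernate sketch of that session-per-transaction pattern (MyEntity and the helper class are hypothetical):

```java
import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.Transaction;

public class ShortTransactions {
    /** Keeps the transaction (and the session bound to it) as short as possible. */
    public static void updateValue(SessionFactory sessionFactory, Long id, String newValue) {
        Session session = sessionFactory.openSession();
        Transaction tx = session.beginTransaction();
        try {
            MyEntity e = session.get(MyEntity.class, id); // sees O, or O' once A committed
            e.setValue(newValue);
            tx.commit();   // flush and end the transaction right away
        } catch (RuntimeException ex) {
            tx.rollback(); // e.g. an optimistic locking failure
            throw ex;
        } finally {
            session.close();
        }
    }
}
```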
We maintain an application that does exactly what you are trying to accomplish. Yes, every session.refresh() will hit the database, but since all sessions will refresh the same row at the same time, the DB server will answer all of these queries from memory.
The only thing that you still need to solve is how to propagate the information that something has changed and needs reloading to all the other sessions, possibly even to sessions on a different host.
For our application, we have about 30 users on RCP and 10-100 users on RAP instances that all connect to the very same DB backend (though through pgpool). We use a small network service that every runtime connects to; when a transaction commits, the application tells this change service that "row id X of table T" has changed and this is then propagated to all other "change subscribers", even across JVMs.
But: make sure that session.refresh() is called within the Thread that belongs to that session, possibly the RAP-Display thread. Do not call refresh() from Jobs or other unrelated threads.
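For example, a hypothetical change subscriber that receives notifications on a network thread but posts the refresh() onto the display thread owning the session:

```java
import org.eclipse.swt.widgets.Display;
import org.hibernate.Session;

public class ChangeSubscriber {
    private final Display display;
    private final Session session;

    public ChangeSubscriber(Display display, Session session) {
        this.display = display;
        this.session = session;
    }

    /** Called by the change service when "row id X of table T" has changed. */
    public void onRowChanged(Object entity) {
        // Session is not thread-safe, so hand the refresh to the display
        // thread instead of calling it from the network thread directly.
        display.asyncExec(() -> session.refresh(entity));
    }
}
```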
As long as you don't have a large number of users updating large numbers of rows in a short time, I guess you won't have to worry about performance.
