I know Odoo will not concurrently update a table row. However, my understanding is that two people can be looking at the same record in edit mode simultaneously. When both users save, the record is overwritten by whoever saved last (even if only 1 ms later). This means a user could be updating the record based on data that had in fact changed while they were editing that exact same record.
How can row level locking be enforced in Odoo?
How can you restrict a record from being opened in edit mode if someone else has the same record opened in edit mode?
Odoo doesn't have such functionality. You can write your own, but it will be complicated.
In general, you can create a table/model holding locks.
For example, when a user clicks the edit button, you create a lock in the locks table; when another user clicks edit on that document, the code first reads the locks table. If there is a lock, an exception pops up.
When the user who created the lock saves their changes, or on timeout, the lock should be released, but the user waiting their turn should only be able to save changes after reloading the page.
It's a simple concept, but in general it's not easy to get right.
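A minimal sketch of that idea as a custom Odoo model (all names here are illustrative, not a stock Odoo feature; it assumes a recent Odoo version where Datetime fields are Python datetimes). The edit action would call acquire() before entering edit mode:

    # Illustrative lock model; every name below is an assumption, not core Odoo.
    from datetime import timedelta

    from odoo import api, fields, models
    from odoo.exceptions import UserError

    class RecordLock(models.Model):
        _name = 'record.lock'
        _description = 'Pessimistic record lock'

        model_name = fields.Char(required=True)
        res_id = fields.Integer(required=True)
        user_id = fields.Many2one('res.users', required=True)
        expires = fields.Datetime(required=True)

        @api.model
        def acquire(self, model_name, res_id, timeout_minutes=15):
            """Take the lock for the current user, or raise if someone holds it."""
            now = fields.Datetime.now()
            self.search([('expires', '<=', now)]).unlink()  # drop stale locks
            holder = self.search([
                ('model_name', '=', model_name),
                ('res_id', '=', res_id),
                ('user_id', '!=', self.env.uid),
            ], limit=1)
            if holder:
                raise UserError("Locked by %s until %s"
                                % (holder.user_id.name, holder.expires))
            # Two concurrent calls could still race between search and create;
            # a unique SQL constraint on (model_name, res_id) closes that gap.
            return self.create({
                'model_name': model_name,
                'res_id': res_id,
                'user_id': self.env.uid,
                'expires': now + timedelta(minutes=timeout_minutes),
            })

Releasing the lock on save, and refusing saves from a user whose lock has expired, would follow the same pattern.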
Odoo by itself is not meant for collaboration of many users on one shared document. There are addons that integrate Etherpad, as in the Notes app. You can use Etherpad for the critical fields of models that must be shared among several users at once.
Currently I have a monolithic process written in C performing the following set of operations:

a. Accept data from the user (in a certain payload format).
   IF the user has provided data for the first time, validate that it is standards-compliant and store it in the user data-store.
   ELSE compare it against the data the user provided previously, which is stored in the user data-store, and check whether it is a duplicate.
      IF DUPLICATE, skip it.
      ELSE validate the data provided by the user and update the previous information in the user data-store.
b. Whenever the user data-store is updated:
   initialize the analytics app to process the information stored in the user data-store and generate the final output;
   replicate the updates into a backup user data-store for future reference.
Currently I am facing scaling issues, especially with the rate at which users keep inputting data. As a result, the current application is limited to accepting only a certain number of user data requests with one monolithic process.
Adding pthreads would be the next step to scale the application, but threads add complexity to the codebase, and proper care has to be taken with locking for consistency.
I want to try a microservices approach, dividing the current monolithic process into the multiple services listed below, so that I can improve each process separately in future updates.
Process-01: contains the logic to accept data from the user, performs all the standard verifications, and updates the shared data-store.
Process-02: whenever the shared data-store is updated, processes the data in it and creates the final output.
Process-03: takes care of replicating the shared data-store.
process-01 ==== (shared-datastore) ==== process-02
                       ||
                  process-03
It seems I need a shared data-store from which process-01 and process-02 would request the user-provided data for processing. Would this create a bottleneck, since I need to create a copy of the requested data as a message and share it with the processes? For process-02 I would need to copy the whole data-store so that it can work on the data independently.
Ideally process-01 needs both read and write permissions, while process-02 needs only read permission on the data in the shared data-store.
Since we are using C for our application, please point out any caveats in the planned new design. Is there a better design for the same goal?
Any other design suggestions would also be a great help, as would pointers to similar application designs in C.
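To make the intended single-writer/read-only-reader split concrete, here is a toy sketch (in Python purely for brevity and self-containedness; in C the store could live in POSIX shared memory guarded by a process-shared pthread_rwlock_t, and process-03 is omitted):

    # Toy model of the proposed split: process-01 ingests and writes,
    # process-02 takes a read-only snapshot whenever it is notified.
    import multiprocessing as mp

    def ingest(store, lock, events):                 # process-01: read/write
        for key, value in [("u1", "a"), ("u1", "a"), ("u1", "b")]:
            with lock:
                if store.get(key) == value:          # duplicate -> skip
                    continue
                store[key] = value                   # validate + update
            events.put(key)                          # notify process-02
        events.put(None)                             # shutdown marker

    def analytics(store, lock, events):              # process-02: read-only
        while (key := events.get()) is not None:
            with lock:
                snapshot = dict(store)               # private copy to process
            print("analytics saw update to", key, "->", snapshot)

    if __name__ == "__main__":
        mgr = mp.Manager()
        store, lock, events = mgr.dict(), mgr.Lock(), mp.Queue()
        p1 = mp.Process(target=ingest, args=(store, lock, events))
        p2 = mp.Process(target=analytics, args=(store, lock, events))
        p1.start(); p2.start(); p1.join(); p2.join()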
I have an issue with a simple Access database. Through debugging I have isolated the issue to the specific form's record-lock setting. I have a split database and a simple form accessing the data. When two users are connected at the same time, the second user to connect suffers serious performance issues on the form. It does not matter who connects first; it is always the second user that takes this performance hit. If I change record locks on the form to No Locks, then there is no problem. I, however, require the record locks.
The performance hit occurs purely when editing a record, and not on locked records either: the locks work correctly, but when the second user edits any non-locked record, the edits take forever to load. Simply ticking a check box takes a minute to complete (during this minute the screen freezes). For user number one everything remains instantaneous.
I am using Microsoft Access 2016, but I have the same issue when I use 2013.
Any help on this head scratcher would be much appreciated!
Tom
I'm currently facing the following problem:
I have a C# .NET application connecting to a database (with the use of NHibernate). The application basically displays the database content and lets the user edit it. Since multiple instances of the application are running at the same time (on the same and on different workstations), I'm having concurrency problems as soon as two users modify the same record at the same time.
Currently I have sort of solved the issue with optimistic locking. But this is not a perfect solution, since one user still loses their changes.
Now I came up with the idea of having the application lock an entry every time it loads one from the database, and release the lock as soon as the user switches to another entry. So basically all entries currently displayed to the user are locked in the database. If another user loads locked entries, they are displayed in read-only mode.
Now to my actual question:
Is it a good idea to do the locking at the database level? That means I would open a new transaction every time a user loads a new entry, and lock the entry there. Or would it be better to do it through a "Lock Table" which holds, for example, a key for every locked entry?
Thanks for your help!
Is it a good idea to do the locking on database level?
Yes, it is fine in some cases.
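(For reference: at the database level, pessimistic locking usually means holding a row lock for the lifetime of a transaction, e.g. SELECT ... FOR UPDATE. Here is a sketch of that pattern, assuming PostgreSQL and psycopg2 purely for illustration; NHibernate exposes the same idea through LockMode.Upgrade.)

    # Sketch: row-level pessimistic lock held for one transaction.
    # Assumes a PostgreSQL table "entry(id, body)"; all names illustrative.
    import psycopg2

    conn = psycopg2.connect("dbname=app")    # hypothetical connection string
    try:
        with conn, conn.cursor() as cur:
            # NOWAIT makes a second locker fail fast instead of blocking,
            # so the UI can immediately fall back to read-only mode.
            cur.execute("SELECT body FROM entry WHERE id = %s FOR UPDATE NOWAIT", (1,))
            row = cur.fetchone()
            cur.execute("UPDATE entry SET body = %s WHERE id = %s", ("edited", 1))
            # the row lock is released when the transaction commits,
            # i.e. on leaving the `with conn` block
    except psycopg2.errors.LockNotAvailable:
        print("row is being edited by someone else; opening read-only")

The catch: the lock only lives as long as the transaction, so "lock while the user edits" means keeping a transaction open across user think time, which leads straight to the questions below.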
So basically all entries which are currently displayed to the user are locked in the database.
...
Or would it be better to do it through a "Lock Table" which holds for example a key to all locked entries in a table?
So you lock a bunch of entries on page load? And when would you release them? What if editing takes a long time (e.g. the user starts editing an entry and then goes to lunch)? What if the user closes the page without editing all of these locked entries; for how long would the entries remain locked?
Pessimistic locking and a "Lock Table" help to avoid some of the problems of optimistic locking, but they bring new ones.
Currently I have sort of solved the issue with optimistic locking. But this is not a perfect solution, since one user still loses their changes.
I can't agree that this is losing changes, because in your case, if the validate and commit phases are performed as a single atomic operation, the entry won't be corrupted; only one transaction will succeed (suppose it is the first), and the other (the second) will be rolled back.
According to NHibernate's Optimistic concurrency control:

It will be atomic if only one of these database transactions (the last one) stores the updated data, all others simply read data.

The only approach that is consistent with high concurrency and high scalability is optimistic concurrency control with versioning. NHibernate provides for three possible approaches to writing application code that uses optimistic concurrency.
So the second transaction would be gracefully rolled back, and after that the user could be notified that they either have to make a new edit (a new transaction) or skip this entry.
But everything depends on your business logic and requirements. If you don't have high contention for the data, and thus there won't be lots of collisions, then I suggest you use optimistic locking.
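The version check itself boils down to a conditional UPDATE plus a row-count check. A minimal illustration of that pattern (Python/SQLite purely for brevity; NHibernate's version mapping generates equivalent SQL, and all names here are made up):

    # Optimistic locking with a version column: the UPDATE only succeeds
    # if nobody has bumped the version since we read the row.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE entry (id INTEGER PRIMARY KEY, body TEXT, version INTEGER)")
    db.execute("INSERT INTO entry VALUES (1, 'original', 0)")

    def save(conn, entry_id, new_body, version_read):
        """Commit only if the row is unchanged since we read it."""
        cur = conn.execute(
            "UPDATE entry SET body = ?, version = version + 1 "
            "WHERE id = ? AND version = ?",
            (new_body, entry_id, version_read))
        conn.commit()
        return cur.rowcount == 1      # False -> stale read, ask user to retry

    print(save(db, 1, "user A's edit", 0))   # True: first writer wins
    print(save(db, 1, "user B's edit", 0))   # False: B's read is now stale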
Apologies if a similar question has been addressed elsewhere, but I'm struggling to find the obvious answer to my issue....
I have rolled out a split database (.accdb, created in Access 2013) to six members of my team by providing each with a copy of the front end, which links to a back end on a shared network drive. Four of the users open the db through Access 2013, one through Access Runtime 2013, and one through Runtime 2010 (32-bit).
The primary job of the database is to allow users to allocate and manage tasks for a set of campaigns. The db centres around a task table which is updated via a bound form. When new task records are created, usually via a control from a parent 'campaign' form, some fields are pre-populated.
The (frequent) bug seems to occur when two users are editing different task records via the task form at the same time. Occasionally one of the task records becomes corrupted (hashed out, or Chinese characters!), but more often one of the tasks becomes duplicated in place of the other. This then leads to duplicate task IDs and the loss of the primary key on this field.
I have tried setting record locking both to No Locks (optimistic locking), on users' Access clients (except the Runtime versions, where I can't see an option to do this) and on the task form itself, and to Edited Record (pessimistic locking) using the setting in the task form's properties.
I am having trouble diagnosing whether the error lies with locking, and/or with the point at which a record is saved (currently just on form close), or whether there is a bigger weakness in the setup. Does anyone have any ideas as to why this duplication, and sometimes corruption, might occur? Thanks
We are re-writing an old Cobol application in Java EE.
The old Cobol is a full client application.
The client's requirement is to lock entities (e.g. a particular account) so that no one else can access them, or can access them only read-only. This is because some transactions might be long, and we don't want users to enter a lot of data just to lose everything while updating.
Optimistic locking is not wanted.
Currently the requirement is implemented by creating locks on a file system, with a lot of problems such as concurrent access and no transactions. Not very Java EE compliant.
The lock should also tell the client who is locking an entity.
Any suggestion?
I would suggest using the database itself for locking instead of the file system approach.
In one application we implemented locking on a per-entity basis by using a special table LOCK which had the fields ENTITY_TYPE, ENTITY_ID, USER_ID and VALID_TO. Locks timed out after a certain period in which the user did not do anything. This was to prevent entities staying locked forever, never editable by other users, when a client closes the application, after network errors, etc.
Before allowing a user to edit an entity, we checked/created a row in this table. In the UI the user had a button to lock an entity, or an info box showing the user holding the lock if the entity was already locked.
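The acquire/renew step of such a LOCK table could look like the sketch below (Python/SQLite purely for brevity; in the Java EE application this would be a JPA entity with equivalent queries, and everything beyond the four fields named above is an assumption):

    # Acquire or renew a per-entity lock with a validity timeout.
    import sqlite3, time

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE lock (
        entity_type TEXT, entity_id INTEGER, user_id TEXT, valid_to REAL,
        PRIMARY KEY (entity_type, entity_id))""")

    TIMEOUT = 15 * 60   # seconds of inactivity before a lock expires

    def acquire(conn, entity_type, entity_id, user_id):
        """Return True if user_id now holds the lock, False if someone else does."""
        now = time.time()
        conn.execute("DELETE FROM lock WHERE valid_to < ?", (now,))  # purge expired
        holder = conn.execute(
            "SELECT user_id FROM lock WHERE entity_type = ? AND entity_id = ?",
            (entity_type, entity_id)).fetchone()
        if holder and holder[0] != user_id:
            return False                  # show holder[0] in the UI info box
        # create the lock, or renew VALID_TO if we already hold it; in a shared
        # server database this whole function must run in one transaction
        # (or rely on the unique key) to stay race-free
        conn.execute("INSERT OR REPLACE INTO lock VALUES (?, ?, ?, ?)",
                     (entity_type, entity_id, user_id, now + TIMEOUT))
        conn.commit()
        return True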