Do I need explicit locking when using atomic update - database

In my web application, I store a counter value in a database table, and I need to increment or reset it on each transaction (the transactions are highly concurrent). Do I need to explicitly lock the row to avoid lost updates? The read committed transaction isolation level is set at the connection level. The following statement updates the counter:
UPDATE Counter c SET value =
CASE
WHEN c.last_updated = SYSDATE THEN c.value+1
ELSE 1
END,
last_updated = SYSDATE
WHERE c.counter_id = 123;
The statement is atomic, and as far as I know the read committed isolation level implicitly locks rows for UPDATE statements. Does this make explicit locking redundant in this case?

You're talking about optimistic locking vs. pessimistic locking ("explicit lock").
If you go with pessimistic locking, you're guaranteed to have no lost updates. However, the approach comes at a price:
It doesn't scale well: you're essentially serializing access to the row being updated, and if the client running the first transaction hangs for some reason, everyone else is stuck.
Given the multi-tier nature of typical web apps, it may be difficult (or impossible) to implement, because the explicit lock must be taken on the same database connection as the update itself, which your middle tier may or may not guarantee.
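For illustration, the pessimistic approach applied to the Counter table from the question would look roughly like this (a minimal sketch in Oracle syntax; :new_value stands for a bind variable the application computes while it holds the lock):
-- take the row lock up front; concurrent sessions block on this statement
SELECT value, last_updated
FROM Counter
WHERE counter_id = 123
FOR UPDATE;
-- ...compute the new value in application code while the lock is held...
UPDATE Counter
SET value = :new_value,
    last_updated = SYSDATE
WHERE counter_id = 123;
COMMIT; -- releases the row lock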
So you can go with optimistic locking instead. Assume the following table:
create table t (id int, value int, version int);
insert into t (id, value, version) values (1, 1, 1);
Basically, the logic would be like this (PL/SQL code):
declare
current_version t.version%type;
current_value t.value%type;
new_value t.value%type;
begin
-- read current version of a row
select version, value
into current_version, current_value
from t where id = 1;
-- calculate new value; while we're doing this,
-- someone else may update the row, changing its version
new_value := func_calculate_new_value(current_value);
-- update the row...
update t
set
value = new_value,
version = version + 1
where 1 = 1
and id = 1
-- but ONLY if the version of the row is the one we read
-- otherwise there would be a lost update
and version = current_version
;
if sql%rowcount = 0 then
-- 0 updated rows means the version has changed in the meantime;
-- we're not updating because we don't want lost updates,
-- so we undo our work and raise to let the caller know
rollback;
raise_application_error(-20000, 'Row version has changed');
end if;
end;

Related

Postgresql lock for insert only [duplicate]

I have two tables in postgresql:
table A: has the definitions for a certain object
table B: has the instances of the objects defined in table A
Table A has columns total_instances and per_player_instances, both of which can be null. When they are set, I need to prevent the number of instances in table B from exceeding them. The code handles most cases, but I was getting duplicates from concurrent inserts.
Table A does not need to be locked, since it rarely changes; when we do change it, we can do so during planned downtime when no inserts are happening in table B.
I wrote a trigger to count the existing instances and return an error if the count has gone over, but I must have gotten the locking wrong as now I am getting deadlock_detected errors. My trigger function is this:
CREATE OR REPLACE FUNCTION ri_constraints_func() RETURNS trigger AS $$
DECLARE
max_total INTEGER := NULL;
max_per_player INTEGER := NULL;
total_count INTEGER := 0;
per_player_count INTEGER := 0;
BEGIN
-- prevent concurrent inserts from multiple transactions
SELECT INTO max_total, max_per_player
awardable_total, awardable_per_player
FROM t1
WHERE id = NEW.t1_id;
LOCK TABLE t2 IN EXCLUSIVE MODE;
IF max_total IS NOT NULL THEN
SELECT INTO total_count
count(1)
FROM t2
WHERE t1_id = NEW.t1_id;
IF total_count >= max_total THEN
RAISE EXCEPTION 'awardable_total_exceeded';
END IF;
END IF;
IF max_per_player IS NOT NULL THEN
SELECT INTO per_player_count
count(1)
FROM t2
WHERE t1_id = NEW.t1_id
AND awarded_to_player_id = NEW.awarded_to_player_id;
IF per_player_count >= max_per_player THEN
RAISE EXCEPTION 'awardable_per_player_exceeded';
END IF;
END IF;
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
Basically what I need to do is prevent inserts into the table between counting the instances and doing the insert. I thought that using LOCK TABLE t2 IN EXCLUSIVE MODE; would accomplish that. I'm doing more research into table locking, but if anyone knows what locking level to use to accomplish this I'd appreciate hearing it.
Also, I'm not married to this particular approach, so if getting this working requires re-writing this function, I'm open to that too.
Postgresql version is 11.
I believe that what you are doing is unnecessary and can probably be accomplished with proper transaction management.
For the transaction that needs to look at the entire table, set the isolation level to SERIALIZABLE. This disallows phantom reads, which is what you are trying to prevent by locking the entire table.
You don't need to take locks manually.
This might help: https://youtu.be/oI2mxnZbSiA
I recommend you read about isolation levels in postgresql.
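A rough sketch of that approach, reusing the t1/t2 names from the trigger (the ids 42 and 7 are placeholder values): the counting and the insert run inside one SERIALIZABLE transaction, and the application retries when the commit fails with a serialization error.
BEGIN ISOLATION LEVEL SERIALIZABLE;
-- count the existing instances and compare them against the limits read from t1
SELECT count(*) FROM t2 WHERE t1_id = 42;
SELECT count(*) FROM t2 WHERE t1_id = 42 AND awarded_to_player_id = 7;
-- insert only if the limits allow it
INSERT INTO t2 (t1_id, awarded_to_player_id) VALUES (42, 7);
COMMIT; -- may fail with SQLSTATE 40001 (serialization_failure); retry the whole transaction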

SQLite logical clock for row data like rowversion in SQL Server

Does SQLite have anything like SQL Server's rowversion column that will increment every time a row changes? Essentially, I want to have a logical clock for each of my tables that updates whenever a table updates. With this logical clock, my application can hold the version it most recently saw, and can only re-fetch if data has changed.
I could implement this with something like:
CREATE TRIGGER FOO_VERSION_INSERT_TRIGGER
AFTER INSERT ON FOO FOR EACH ROW
BEGIN
UPDATE CLOCKS
SET VERSION = (
SELECT IFNULL(MAX(VERSION), 0) + 1 FROM CLOCKS
)
WHERE TABLE_NAME = 'FOO';
END;
CREATE TRIGGER FOO_VERSION_UPDATE_TRIGGER
AFTER UPDATE ON FOO FOR EACH ROW
BEGIN
UPDATE CLOCKS
SET VERSION = (
SELECT IFNULL(MAX(VERSION), 0) + 1 FROM CLOCKS
)
WHERE TABLE_NAME = 'FOO';
END;
CREATE TRIGGER FOO_VERSION_DELETE_TRIGGER
AFTER DELETE ON FOO FOR EACH ROW
BEGIN
UPDATE CLOCKS
SET VERSION = (
SELECT IFNULL(MAX(VERSION), 0) + 1 FROM CLOCKS
)
WHERE TABLE_NAME = 'FOO';
END;
But this seems like something that should natively exist already.
You can use PRAGMA data_version to see if any other connection to the database modified it and your program thus needs to refresh data.
The integer values returned by two invocations of "PRAGMA data_version" from the same connection will be different if changes were committed to the database by any other connection in the interim.
It won't tell you which specific tables were changed, though.
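A minimal sketch of how it can be used from one connection (the returned numbers are only examples):
PRAGMA data_version;   -- e.g. returns 2
-- ... another connection commits a change to the database ...
PRAGMA data_version;   -- now returns e.g. 3: something changed, so re-fetch your data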

SQL Server : incrementing non identity int column by procedure call

I have a column in a DB table which has to be incremented when, let's say, some item is selected. But items can be selected in parallel, and for each record the counter has to start from 0. My solution is to increment the value from a DB procedure, but can I be sure that the first procedure manages to increment the value before another procedure loads the value to increment? I mean:
t0 Value is 10
t1 Procedure1 valueToInc = Value
t2 Procedure2 valueToInc = Value
t3 Procedure1 valueToInc ++
t4 Procedure2 valueToInc ++
t5 Value = 11
t6 Value = 11
The value written back from Procedure1 is 11, but from Procedure2 it is obviously also 11 (it needs to be 12 there).
I have also checked identity (property) and sequence (Transact-SQL) but nothing seems to be suitable for me.
Edit
What I'm trying to solve is this: I have a console application (a TCP server) and an MSSQL database with a User table. Each time a user logs in, I have to increment that user's loginCount field. Any parallel access here should not be possible, or is manageable from code, I know, but I was told that I have to handle parallel access in the database, so not just use an update query. It's a job interview project...
I wanted to make it easier to understand with my first explanation, but that didn't work out.
You can just use
UPDATE Users
SET LoginCount = ISNULL(LoginCount,0) + 1
WHERE UserId = @UserId
This is entirely safe under conditions of concurrency.
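If the caller also needs the new count, it can be returned from the same atomic statement; here is a sketch using the OUTPUT clause (available in SQL Server 2005 and later):
UPDATE Users
SET LoginCount = ISNULL(LoginCount, 0) + 1
OUTPUT INSERTED.LoginCount   -- the incremented value, read and written atomically
WHERE UserId = @UserId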
Use a transaction with transaction isolation level equal to SERIALIZABLE.
SERIALIZABLE
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Don't load the value to increment it: increment it, then select it (within the transaction). This will lock the table/row (depending on granularity) against updates/selects from other transactions.
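Applied to the login counter from the question, that pattern would look roughly like this (a sketch; the isolation level applies to the whole transaction):
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
UPDATE Users
SET LoginCount = ISNULL(LoginCount, 0) + 1
WHERE UserId = @UserId;
SELECT LoginCount   -- read back the value we just wrote, still inside the transaction
FROM Users
WHERE UserId = @UserId;
COMMIT TRANSACTION;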

max(column_name) is returning the same value when the load on the database is heavy

The query below is hit continuously and records are inserted into the table TRANSACTION_MAIN, but @Confrm, which is a number one greater than max(TRN_CNFRM_NBR), is the same for a couple of transactions. This behaviour is seen only when the load on the database server is very high. Any insights into why this behaviour is observed and what might be happening behind the scenes?
BEGIN TRANSACTION
DECLARE @Confrm as int;
SET @Confrm = (SELECT isnull
(MAX(CONVERT(int, TRN_CNFRM_NBR)),0)
FROM TRANSACTION_MAIN WHERE
TRN_UC_LOC = @1) + 1;
DECLARE @TMID as int;
INSERT INTO TRANSACTION_MAIN(
TRN_CNFRM_NBR
,TRN_UC_LOC
,TRN_STAT_ANID
,TRN_SRC_ANID
,PRSN_ANID
,TRN_DT
,TRN_ACTL_AMT
,TRN_MTHD
,TRN_IP_ADDR
,DSCT_CD
,TRN_PAID_AMT
,CSHR_ANID
,INVDEPTEQUIP_ANID
,TRN_CMNT
,CPN_DSCT_TOTAL
)
VALUES(@Confrm,@1,@2,@3,@4,@5,@6,@7,@8,@9,@10,@11,@12,@13,@14);
SET @TMID = @@IDENTITY;
COMMIT TRANSACTION
Well, this isn't safe, so it shouldn't be a surprise that it doesn't work.
There are a few contributors to the final result. First, simultaneous selects for the max value will of course return the same value, because the "new" rows haven't been inserted yet. Second, depending on the transaction isolation level, the select doesn't see a row that has been inserted but not yet committed.
As a quick fix, it should help to simply raise the transaction isolation level. This will of course reduce your throughput and increase the risk of deadlocks, but at least it will be correct. Or, if your SQL Server version is recent enough, use sequences. Or thread-safe CLR code.
And if you're stuck on an old SQL server and can't handle the higher transaction isolation, you can try implementing this using your own sequences, where incrementing the sequence is an atomic operation. There's a nice example in Microsoft SQL Server Migration Assistant for Oracle.
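For example, on SQL Server 2012 and later a native sequence removes the MAX()+1 read entirely; a sketch with a hypothetical sequence name (note that a single sequence numbers all locations globally, so a per-TRN_UC_LOC counter would need one sequence per location):
-- created once
CREATE SEQUENCE dbo.ConfirmationNumbers START WITH 1 INCREMENT BY 1;
-- inside the inserting code: atomic even under heavy concurrency
DECLARE @Confrm int;
SET @Confrm = NEXT VALUE FOR dbo.ConfirmationNumbers;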
You have misplaced the addition of 1.
The query should look like this:
SET @Confrm = (
SELECT ISNULL(MAX(CONVERT(INT, TRN_CNFRM_NBR)), 0) + 1 -- you should add 1 here
FROM TRANSACTION_MAIN
WHERE TRN_UC_LOC = @1
);

Ensuring unique numbers from a sql server database

I have an application that uses incident numbers (amongst other types of numbers). These numbers are stored in a table called "Number_Setup", which contains the current value of the counter.
When the app generates a new incident, it queries the Number_Setup table and gets the required number counter row (counters can be reset daily, weekly, etc. and are stored as ints). It then increments the counter and updates the row with the new value.
The application is multiuser (approximately 100 users at any one time, plus SQL jobs that run, grab hundreds of incident records, and request an incident number for each). The incident table ends up with duplicate incident numbers where there should be none.
A stored proc is used to retrieve the next counter.
SELECT @Counter = counter, @ShareId=share_id, @Id=id
FROM Number_Setup
WHERE LinkTo_ID=@LinkToId
AND Counter_Type='I'
IF isnull(@ShareId,0) > 0
BEGIN
-- use parent counter
SELECT @Counter = counter, @ID=id
FROM Number_Setup
WHERE Id=@ShareID
END
SELECT @NewCounter = @Counter + 1
UPDATE Number_Setup SET Counter = @NewCounter
WHERE id=@Id
I've now surrounded that block with a transaction, but I'm not entirely sure it will 100% fix the problem, as I think there are still shared locks, so the counter can be read anyway.
Perhaps I can check that the counter hasn't been updated, in the update statement
UPDATE Number_Setup SET Counter = @NewCounter
WHERE Counter = @Counter
IF @@ERROR = 0 AND @@ROWCOUNT > 0
COMMIT TRANSACTION
ELSE
ROLLBACK TRANSACTION
I'm sure this is a common problem with invoice numbers in financial apps etc.
I cannot put the logic in code either and use locking at that level.
I've also looked at HOLDLOCK but I'm not sure of its application. Should it be put on the two SELECT statements?
How can I ensure no duplicates are created?
The trick is to do the counter update and read in a single atomic operation:
UPDATE Number_Setup SET Counter = Counter+1
OUTPUT INSERTED.Counter
WHERE id=@Id;
This though does not assign the new counter to #NewCounter, but instead returns it as a result set to the client. If you have to assign it, use an intermediate table variable to output the new counter INTO:
declare @NewCounter int;
declare @tabCounter table (NewCounter int);
UPDATE Number_Setup SET Counter = Counter+1
OUTPUT INSERTED.Counter INTO @tabCounter (NewCounter)
WHERE id=@Id
SELECT @NewCounter = NewCounter FROM @tabCounter;
This solves the problem of making the Counter increment atomic. You still have other race conditions in your procedure, because the LinkTo_Id and share_id can still be updated after the first select, so you can increment the counter of the wrong link-to item; but that cannot be solved just from this code sample, as it also depends on the code that actually updates the shared_id and/or LinkTo_Id.
BTW, you should get into the habit of naming your fields with consistent case, and of using the exact matching case in your T-SQL code. Your scripts run fine now only because your server has a case-insensitive collation; if you deploy on a server with a case-sensitive collation and your scripts don't match the exact case of field/table names, errors will follow galore.
Have you tried using GUIDs instead of autoincrements as your unique identifier?
If you have the ability to modify your job that gets multiple records, I would change the thinking so that your counter is an identity column. Then when you get the next record you can just do an insert and read @@IDENTITY (or SCOPE_IDENTITY()) from the table. That would ensure that you get the biggest number. You would also have to run DBCC CHECKIDENT with RESEED to reset the counter, instead of just updating the table, when you want to reset the identity. The only issue is that you'd have to do 100 or so inserts as part of your SQL job to get a group of identities. That may be too much overhead, but using an identity column is a guaranteed way to get unique numbers.
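A rough sketch of that idea with a hypothetical numbers table (SCOPE_IDENTITY() is usually preferable to @@IDENTITY because it ignores identity values generated by triggers):
CREATE TABLE IncidentNumbers (
    IncidentNumber int IDENTITY(1,1) PRIMARY KEY,
    CreatedAt datetime NOT NULL DEFAULT GETDATE()
);
-- each insert hands out the next unique number, safe under concurrency
INSERT INTO IncidentNumbers DEFAULT VALUES;
SELECT SCOPE_IDENTITY() AS NewIncidentNumber;
-- resetting the counter (e.g. for a daily reset) reseeds the identity column
DBCC CHECKIDENT ('IncidentNumbers', RESEED, 0);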
I might be missing something, but it seems like you are trying to reinvent technology that has already been solved by most databases.
Instead of reading and updating the 'Counter' column in the Number_Setup table, why don't you just use an autoincrementing primary key for your counter? You'll never have a duplicate value for a primary key.

Resources