I have a rowversion column in a SQL Server table. On every INSERT and UPDATE, an auto-generated value is written to this column automatically. I need the same behavior in PostgreSQL. One option is a trigger function with sequence generation, but that may cause locking issues. What is the best practice/alternative in PostgreSQL?
The question is somewhat unclear. rowversion in SQL Server is only used as a concurrency token in optimistic concurrency scenarios. Because it is maintained by the engine, it is faster than a trigger that updates a LastModified timestamp or increments a stored column.
The equivalent in PostgreSQL is the system-provided xmin column:
xmin
The identity (transaction ID) of the inserting transaction for this row version. (A row version is an individual state of a row; each update of a row creates a new row version for the same logical row.)
Essentially, for a single row, xmin always changes after every modification, just like rowversion does. It's faster than a trigger too, since it requires no extra effort.
The NpgSQL provider for Entity Framework uses xmin as a concurrency token.
If you want to implement optimistic concurrency manually, read the xmin column in your SELECT statement and use that value in updates, eg:
SELECT xmin, ID, Name FROM sometable;
Which returns
xmin | ID | name
------+----+------
123 | 23 | Moo
And then
UPDATE sometable
SET name = 'Foo'
WHERE ID = 23 AND xmin = 123
If the row was modified by some other transaction, xmin won't match and no changes will be made. You can detect that by checking how many rows were changed using your provider's API. That's how rowversion works too.
Another possibility mentioned in the linked question is to use the RETURNING clause to return some value to the client. If no value is returned, the statement failed, eg:
UPDATE sometable
SET name = 'Foo'
WHERE ID = 23 AND xmin = 123
RETURNING 1
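If you plan to update the same row again afterwards, the fresh concurrency token can be read back in the same round trip; a small extension of the example above (not in the original answer), since PostgreSQL allows system columns in RETURNING:
UPDATE sometable
SET name = 'Foo'
WHERE ID = 23 AND xmin = 123
RETURNING xmin;
The returned xmin is the token of the new row version (the updating transaction's ID) and can be compared in the next optimistic UPDATE.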
Not sure what "locking issue" you are talking about, but you can get something equivalent (assuming I understood the "row version" thing correctly) without a sequence:
create table some_table
(
.... columns of the table ...,
row_version bigint not null default 0
);
Then create a trigger function:
create function increment_row_version()
returns trigger
as
$$
begin
new.row_version := old.row_version + 1;
return new;
end;
$$
language plpgsql;
Because the trigger reads old.row_version, an UPDATE statement cannot overwrite the row version value.
And then create the trigger for every table where you need it:
create trigger update_row_version_trigger
before update on your_table_name_here
for each row
execute procedure increment_row_version();
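A quick way to verify the trigger's behavior; the name column here is illustrative, substitute your table's real columns:
-- assumes some_table has a text column "name" besides row_version
insert into some_table (name) values ('Moo');           -- row_version = 0 (the default)
update some_table set name = 'Foo' where name = 'Moo';  -- trigger bumps row_version to 1
select name, row_version from some_table;               -- Foo | 1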
If you also want to prevent inserts from supplying e.g. a higher starting number, you can extend the trigger to fire on INSERT as well and, in the insert case, assign the desired start value explicitly.
If you need a global value across all tables (rather than one number for each table as the above does), create a sequence and use nextval() inside the trigger rather than incrementing the value. And no, the use of a sequence will not cause "locking issues".
The trigger would then look like this:
create function increment_row_version()
returns trigger
as
$$
begin
new.row_version := nextval('global_row_version_sequence');
return new;
end;
$$
language plpgsql;
and can be used for both insert and update triggers.
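For completeness, the sequence referenced in the trigger has to exist before the trigger first fires; a minimal definition:
create sequence global_row_version_sequence;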
Related
I have a column in a DB table which has to be incremented when, let's say, some item is selected. But items can be selected in parallel, and for every record the counter has to start from 0. My solution is to increment the value from a DB procedure, but can I be sure that the first procedure manages to increment the value before another procedure wants to load the value to increment? I mean:
t0 Value is 10
t1 Procedure1 valueToInc = Value
t2 Procedure2 valueToInc = Value
t3 Procedure1 valueToInc ++
t4 Procedure2 valueToInc ++
t5 Value = 11
t6 Value = 11
The value written back from Procedure1 is 11, but from Procedure2 it is obviously also 11 (I need it to be 12 there).
I have also checked identity (property) and sequence (Transact-SQL) but nothing seems to be suitable for me.
Edit
What I'm trying to solve is this: I have a console application (a TCP server) and an MSSQL database with a User table. Each time a single user wants to log in, I have to increment the user's loginCount field. Any parallelization here should not be possible, or is manageable from code, I know, but I was told that I have to handle parallel access in the database, so not just use a plain update query. I have it as a job interview project...
I wanted to make the problem easier to understand with my first explanation, but it didn't work out.
You can just use
UPDATE Users
SET LoginCount = ISNULL(LoginCount,0) + 1
WHERE UserId = @UserId
This is entirely safe under conditions of concurrency.
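If the caller also needs the new count, it can be returned from the same atomic statement with the OUTPUT clause; a small sketch building on the statement above:
UPDATE Users
SET LoginCount = ISNULL(LoginCount, 0) + 1
OUTPUT INSERTED.LoginCount   -- returns the incremented value to the caller
WHERE UserId = @UserId;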
Use a transaction with transaction isolation level equal to SERIALIZABLE.
SERIALIZABLE
Statements cannot read data that has been modified but not yet committed by other transactions.
No other transactions can modify data that has been read by the current transaction until the current transaction completes.
Other transactions cannot insert new rows with key values that would fall in the range of keys read by any statements in the current transaction until the current transaction completes.
Don't load the Value to increment it: increment it, then select it (within the transaction). This will lock the table or row (depending on granularity) against updates/selects from other transactions.
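A sketch of that pattern, reusing the Users table from the question: increment first, then read, all inside one SERIALIZABLE transaction:
SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;
    -- increment first...
    UPDATE Users
    SET LoginCount = ISNULL(LoginCount, 0) + 1
    WHERE UserId = @UserId;
    -- ...then read the new value within the same transaction
    SELECT LoginCount
    FROM Users
    WHERE UserId = @UserId;
COMMIT TRANSACTION;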
This is a general question about how to lock a range of values (and nothing else!) when they do not exist in the table yet. The trigger for the question was that I want to do "insert if not exists"; I don't want to use MERGE because I need to support SQL Server 2005.
In the first connection I:
begin transaction
select data from a table using (SERIALIZABLE, ROWLOCK) + a where clause to specify the range
wait...
In the second connection, I insert data to the table with values that do not match the where clause in the first connection
I would expect that the second connection won't be affected by the first one, but it finishes only after I commit (or rollback) the first connection's transaction.
What am I missing?
Here is my test code:
First create this table:
CREATE TABLE test
(
VALUE nvarchar(100)
)
Second, open a new query window in SQL Server Management Studio and execute the following:
BEGIN TRANSACTION;
SELECT *
FROM test WITH (SERIALIZABLE,ROWLOCK)
WHERE value = N'a';
Third, open another new query window and execute the following:
INSERT INTO test VALUES (N'b');
Notice that the second query doesn't end until the transaction in the first window ends.
You are missing an index on VALUE.
Without that SQL Server has nothing to take a key range lock on and will lock the whole table in order to lock the range.
Even when the index is added, however, you will still encounter blocking with the scenario in your question. The RangeS-S lock doesn't lock only the specific range given in your query. Instead it locks the range between the keys on either side of the selected range.
When there are no such keys on either side, the range lock extends to infinity. You would need to add a value between 'a' and 'b' (for example 'aa') to prevent this from happening in your test and the insert of 'b' from being blocked.
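Applied to the test setup above, something like this should stop the INSERT from blocking (the index name is illustrative):
CREATE INDEX IX_test_value ON test (VALUE);
INSERT INTO test VALUES (N'aa');  -- a key between N'a' and N'b' that bounds the range lock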
See Bonus Appendix: Range Locks in this article for more about this.
I would like to have two columns in my table to store the add-time and update-time. As the names suggest, the add-time is the time when a row was first added; the update-time is the last time a row was updated. I can implement the first by defaulting the value to GETDATE(). As for the second, @Jeremy suggested using triggers here:
On Update: Auto Update Date/Time Field
Is there any easier way?
If I implement a trigger, does that mean two UPDATE statements (or one INSERT and one UPDATE in case the row is just created) have to be executed?
Thanks.
EDIT: For the second part of the question, this is the trigger I have in my database:
CREATE TRIGGER [dbo].[TR_AddUpdateTime]
ON [dbo].[AddUpdateTime]
AFTER UPDATE
AS
BEGIN
-- SET NOCOUNT ON added to prevent extra result sets from
-- interfering with SELECT statements.
SET NOCOUNT ON;
-- Insert statements for trigger here
UPDATE r
SET UpdateTime = GETDATE()
FROM AddUpdateTime r
JOIN inserted i
ON i.Id = r.Id
END
Does this mean that an additional update statement will be executed whenever I make an update to the AddUpdateTime table, or is MSSQL smart enough to recognise that I am updating the same record and save both changes at the same time?
Other ways:
Use a stored procedure to wrap the updates
You can do UPDATE MyTable SET ..., UpdatedWhen = DEFAULT... (see the sketch after this list)
You need an UPDATE trigger that itself has one more UPDATE. Using a default on the table means you don't need a trigger for INSERT
You could make sure all inserts and updates go through a stored procedure that inserts the time.
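A sketch of the DEFAULT approach from the list above, using a hypothetical table; DEFAULT in an UPDATE re-applies the column's default expression:
CREATE TABLE MyTable
(
    Id int PRIMARY KEY,
    Name nvarchar(100) NOT NULL,
    AddedWhen datetime NOT NULL DEFAULT GETDATE(),
    UpdatedWhen datetime NOT NULL DEFAULT GETDATE()
);

UPDATE MyTable
SET Name = N'New name', UpdatedWhen = DEFAULT  -- re-evaluates GETDATE()
WHERE Id = 1;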
No, the insert trigger will modify the values so that it's only one statement.
Edit: For Entity Framework, could you implement the OnSavingChanges event to insert the update-time field (see here)? This moves the responsibility from the DB to the code, which you may or may not be comfortable with.
In Entity Framework, you can use a partial class to extend the business logic. In this case, you can use OnPropertyChanged to set the update-time to DateTime.Now. You can use this article on MSDN as guidance.
1) "Auto update" and "triggers" doesn't really sound like the way to go.
2) SQL Server has a (relatively new) "merge" statement. But that doesn't really sound like what you're looking for, either.
3) Instead:
a) If the primary key doesn't exist (if "new"), then INSERT. In this case, first time = last time = GETDATE().
b) Otherwise, if the primary key already exists, then UPDATE. Your update will update only the "last time" column (along with the rest of the fields you need to update for this record).
4) Perhaps you can wrap this logic in a stored procedure? (Sketched below.)
5) Again - the key is to update BOTH "first time" and "last time" the FIRST time, and then update ONLY "last time" all SUBSEQUENT times.
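A hedged sketch of steps 3-5 wrapped in a stored procedure; the table and column names are illustrative, not from the question:
CREATE PROCEDURE dbo.UpsertRecord
    @Id int,
    @Name nvarchar(100)
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM dbo.Records WHERE Id = @Id)
        -- subsequent times: update only "last time"
        UPDATE dbo.Records
        SET Name = @Name, LastTime = GETDATE()
        WHERE Id = @Id;
    ELSE
        -- first time: first time = last time = GETDATE()
        INSERT INTO dbo.Records (Id, Name, FirstTime, LastTime)
        VALUES (@Id, @Name, GETDATE(), GETDATE());
END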
There might be an easier way, but using triggers will be more effective and will guarantee that no matter how records are inserted or updated (from .NET code or direct table inserts/updates), those two fields are populated.
To guarantee that only one trigger gets fired each time, combine the insert and update triggers:
CREATE TRIGGER <trigger name> ON TableA FOR INSERT, UPDATE
And do conditional checking to distinguish between the two actions, for example by testing whether the deleted pseudo-table contains rows (it does only for an UPDATE).
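A sketch of such a combined trigger; the table and column names (TableA, Id, AddTime, UpdateTime) are assumptions:
CREATE TRIGGER TR_TableA_Stamp ON TableA
FOR INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT 1 FROM deleted)
        -- rows in deleted: this is an UPDATE, touch only UpdateTime
        UPDATE t
        SET UpdateTime = GETDATE()
        FROM TableA t
        JOIN inserted i ON i.Id = t.Id;
    ELSE
        -- no rows in deleted: this is an INSERT, stamp both columns
        UPDATE t
        SET AddTime = GETDATE(), UpdateTime = GETDATE()
        FROM TableA t
        JOIN inserted i ON i.Id = t.Id;
END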
I have an application that uses incident numbers (amongst other types of numbers). These numbers are stored in a table called "Number_Setup", which contains the current value of the counter.
When the app generates a new incident, it reads the Number_Setup table and gets the required number counter row (counters can be reset daily, weekly, etc. and are stored as ints). It then increments the counter and updates the row with the new value.
The application is multiuser (approximately 100 users at any one time, as well as SQL jobs that run and grab hundreds of incident records and request incident numbers for each). The incident table has some duplicate incident numbers where they should not be duplicated.
A stored proc is used to retrieve the next counter.
SELECT @Counter = counter, @ShareId = share_id, @Id = id
FROM Number_Setup
WHERE LinkTo_ID = @LinkToId
AND Counter_Type = 'I'

IF ISNULL(@ShareId, 0) > 0
BEGIN
    -- use parent counter
    SELECT @Counter = counter, @Id = id
    FROM Number_Setup
    WHERE Id = @ShareId
END

SELECT @NewCounter = @Counter + 1

UPDATE Number_Setup SET Counter = @NewCounter
WHERE id = @Id
I've now surrounded that block with a transaction, but I'm not entirely sure it will 100% fix the problem, as I think shared locks still allow the counter to be read anyway.
Perhaps I can check that the counter hasn't been updated, in the update statement
UPDATE Number_Setup SET Counter = @NewCounter
WHERE id = @Id AND Counter = @Counter

IF @@ERROR = 0 AND @@ROWCOUNT > 0
    COMMIT TRANSACTION
ELSE
    ROLLBACK TRANSACTION
I'm sure this is a common problem with invoice numbers in financial apps etc.
I cannot put the logic in code either and use locking at that level.
I've also looked at HOLDLOCK but I'm not sure of its application. Should it be put on the two SELECT statements?
How can I ensure no duplicates are created?
The trick is to do the counter update and read in a single atomic operation:
UPDATE Number_Setup SET Counter = Counter+1
OUTPUT INSERTED.Counter
WHERE id = @Id;
This though does not assign the new counter to #NewCounter, but instead returns it as a result set to the client. If you have to assign it, use an intermediate table variable to output the new counter INTO:
declare @NewCounter int;
declare @tabCounter table (NewCounter int);

UPDATE Number_Setup SET Counter = Counter + 1
OUTPUT INSERTED.Counter INTO @tabCounter (NewCounter)
WHERE id = @Id;

SELECT @NewCounter = NewCounter FROM @tabCounter;
This solves the problem of making the Counter increment atomic. You still have other race conditions in your procedure, because LinkTo_ID and share_id can still be updated after the first SELECT, so you can increment the counter of the wrong link-to item; but that cannot be solved from this code sample alone, as it also depends on the code that actually updates share_id and/or LinkTo_ID.
BTW, you should get into the habit of naming your fields with consistent case, and of always using the exact matching case in T-SQL code. Your scripts run fine now only because you have a case-insensitive collation server; if you deploy on a case-sensitive collation server and your scripts don't match the exact case of the field/table names, errors will follow galore.
Have you tried using GUIDs instead of autoincrements as your unique identifier?
If you have the ability to modify your job that gets multiple records, I would change the approach so that your counter is an identity column. Then when you get the next record you can just do an insert and read @@IDENTITY (or better, SCOPE_IDENTITY()) for the table. That would ensure that you get the biggest number. You would also have to run DBCC CHECKIDENT with RESEED to reset the counter, instead of just updating the table, when you want to reset the identity. The only issue is that you'd have to do 100 or so inserts as part of your SQL job to get a group of identities. That may be too much overhead, but using an identity column is a guaranteed way to get unique numbers.
I might be missing something, but it seems like you are trying to reinvent technology that has already been solved by most databases.
Instead of reading and updating the Counter column in the Number_Setup table, why don't you just use an auto-incrementing primary key for your counter? You'll never have a duplicate value for a primary key.
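A minimal sketch of that idea (the table name is illustrative): let IDENTITY hand out the numbers, so uniqueness is guaranteed by the engine:
CREATE TABLE IncidentNumbers
(
    IncidentNumber int IDENTITY(1,1) PRIMARY KEY,
    CreatedAt datetime NOT NULL DEFAULT GETDATE()
);

INSERT INTO IncidentNumbers DEFAULT VALUES;
SELECT SCOPE_IDENTITY() AS NewIncidentNumber;  -- the freshly generated, unique number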
I would like to automatically increment a field named `incrementID' anytime any field in any row within the table named 'tb_users' is updated.
Currently I am doing it via the sql update statement.
i.e "UPDATE tb_users SET name = #name, incrementID = incrementID + 1 .....WHERE id = #id;
I'm wondering how I can do this automatically, for example by changing the way SQL Server treats the field, kind of like the increment setting of 'Identity'.
Before I update a row, I wish to check whether the incrementID of the object to be updated is different to the incrementID of the row of the db.
Columns in the Table can have an Identity Specification set. Simply expand the node in the property window and fill in the details (Is Identity, Increment, Seed).
The IDENTITYCOL keyword can be used for operations on Identity Specifications.
This trigger should do the trick:
create trigger update_increment on tb_users for update as
if not update(incrementID)
    update tb_users set incrementID = incrementID + 1
    from inserted where tb_users.id = inserted.id
You could use a trigger for this (if I've read you correctly and you want the value incremented each time you update the row).
If you just need to know that it changed, rather than specifically that this is a later version or how many changes there have been, consider using a rowversion column.
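A minimal sketch of the rowversion approach; the column name VersionStamp and the variable @originalVersionStamp are assumptions:
ALTER TABLE tb_users ADD VersionStamp rowversion;

-- SQL Server regenerates VersionStamp on every update, so comparing it in
-- the WHERE clause detects concurrent modifications:
UPDATE tb_users
SET name = @name
WHERE id = @id AND VersionStamp = @originalVersionStamp;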