How to reset Autoincremented Id when rollback occurs in sql - sql-server

When I try to insert a new row into a table and an exception occurs during the transaction, the data is rolled back.
The next time a row is successfully inserted, the auto-increment ID has already moved on to the next value, which means there is a gap between two consecutive unique IDs in the table.
Is there any valid way to overcome this problem?
Thanks in advance

The answer, it has to be said, is no.
The whole idea of IDENTITY columns is that the values are not meaningful and are transaction agnostic - the numbers can be dished out without caring whether other transactions roll back or not. Imagine a system doing 1000 inserts per second being held up 10ms per insert while each transaction decides whether it will commit (for reference, just 100 inserts at 10ms each already adds up to a full second).
Note: In SQL Server 2012 (latest SP/patch level at time of writing), there is a "feature" noted here on Connect related to identities.
Also, even prior to 2012, you don't even need a rollback to consume an IDENTITY value - see here https://stackoverflow.com/a/16156419/573261
This applies to other major RDBMS as well for the same reasons, e.g.
PostgreSQL sequences
Important: To avoid blocking concurrent transactions that obtain numbers from the same sequence, a nextval operation is never rolled back; that is, once a value has been fetched it is considered used, even if the transaction that did the nextval later aborts. This means that aborted transactions might leave unused "holes" in the sequence of assigned values.
(emphasis mine)
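The behaviour is easy to see with a quick demo. This is a minimal sketch (the table name and values are hypothetical, not from the question), run in a scratch database:

```sql
-- Hypothetical demo table showing that a rollback consumes an IDENTITY value.
CREATE TABLE dbo.Demo (Id INT IDENTITY(1,1), Name VARCHAR(50));

INSERT INTO dbo.Demo (Name) VALUES ('first');   -- gets Id = 1

BEGIN TRANSACTION;
INSERT INTO dbo.Demo (Name) VALUES ('second');  -- Id = 2 is handed out here
ROLLBACK TRANSACTION;                           -- the row is undone, Id 2 is not reclaimed

INSERT INTO dbo.Demo (Name) VALUES ('third');   -- gets Id = 3, leaving a gap at 2
SELECT Id, Name FROM dbo.Demo;
```

The final SELECT returns only Ids 1 and 3: the rollback undid the row but not the identity allocation.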

Related

SQL Server 2014: Thread safe generation of sequence numbers

I need to generate progressive numbers for invoices, avoiding gaps in the sequence.
At the beginning I thought it was as easy as:
SELECT MAX(Docnumber)+1 as NewDocNumber
from InvoicesHeader
but since it takes some time to build the "INSERT INTO InvoicesHeader" query, another request could arrive in the meantime, assigning the same NewDocNumber to both invoices.
I'm now thinking of avoiding generating the DocNumber in advance, and changed the query to:
INSERT INTO InvoicesHeader (InvoiceID,..., DocNumber,...)
SELECT @InvoiceID,..., MAX(Docnumber)+1,... FROM InvoicesHeader
but although it should solve some problems, it is still not safe against race conditions.
Adding TABLOCK or UPDLOCK, in this way:
BEGIN TRANSACTION TR1
INSERT INTO InvoicesHeader WITH (TABLOCK)
(InvoiceID,..., DocNumber,...)
SELECT @InvoiceID,..., MAX(Docnumber)+1,... FROM InvoicesHeader
COMMIT TRANSACTION TR1
Will it solve the issue?
Or better to use ISOLATION LEVEL, NEXT VALUE FOR or other solution?
You already have thread-safe sequence generation in SQL Server: read about CREATE SEQUENCE, available starting from SQL Server 2012. It is a better fit here because a sequence value is generated outside the transaction scope.
Sequence numbers are generated outside the scope of the current
transaction. They are consumed whether the transaction using the
sequence number is committed or rolled back.
You can get the next value from the sequence. We have been using sequences for generating order numbers and have not found issues, even when multiple order numbers are generated in parallel.
SELECT NEXT VALUE FOR DocumentSequenceNumber;
Updated, based on comments: if you have four different document types, I would suggest you first generate the sequence value and then concatenate it with a specific document type prefix; it will be easier for you to understand. At the end of the year, you can restart the sequence using ALTER SEQUENCE:
RESTART [ WITH <constant> ] - The next value that will be returned by the sequence object. If provided, the RESTART WITH value must be an integer that is less than or equal to the maximum and greater than or equal to the minimum value of the sequence object. If the WITH value is omitted, the sequence numbering restarts based on the original CREATE SEQUENCE options.
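A minimal sketch of that approach (SQL Server 2012+): the sequence name follows the SELECT in the answer, but the schema, data type and options are assumptions:

```sql
-- Create the sequence once (name taken from the answer; options assumed).
CREATE SEQUENCE dbo.DocumentSequenceNumber
    AS BIGINT
    START WITH 1
    INCREMENT BY 1;

-- Consume a value; it is consumed even if the surrounding transaction rolls back.
SELECT NEXT VALUE FOR dbo.DocumentSequenceNumber;

-- NEXT VALUE FOR can also be used directly in the INSERT:
-- INSERT INTO InvoicesHeader (InvoiceID, DocNumber, ...)
-- VALUES (@InvoiceID, NEXT VALUE FOR dbo.DocumentSequenceNumber, ...);

-- At the end of the year, restart the numbering:
ALTER SEQUENCE dbo.DocumentSequenceNumber RESTART WITH 1;
```

Note that because values are consumed on fetch regardless of commit or rollback, a sequence gives you thread safety but not gap-free numbering.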

SQL Server 2008 custom sequence IDs

Most of my databases use IDENTITY columns as primary keys. I am writing a change log/audit trail in the database and want to use the BIGINT ID to keep track of the changes sequentially.
While BIGINT is pretty big, it will run out of numbers one day and my design will fail to function properly at that point. I have been aware of this problem with my other ID columns and intended to eventually convert to GUIDs/UUIDs as I have used on Postgres in the past.
GUIDs take 16 bytes and BIGINT takes 8. For my current task, I would like to stay with BIGINT for the space savings and the sequencing. Under Postgres, I created a custom sequence with the first two digits as the current year and a fixed number of digits as the sequence within the year. The sequence generator automatically reset the sequence when the year changed.
SQL Server 2008 has no sequence generator. My research has turned up some ideas, most of which involve using a table to maintain the sequence number, updating it within a transaction, and then using it to assign IDs to my data in a separate transaction.
I want to write an SP or function that updates the sequence and returns the new value when called from a trigger on the target table before a row is written. There are many ideas out there, but all seem to run into locking issues and isolation problems.
Does anyone have a suggestion on how to automate this ID assignment, protect the process from assigning duplicates in a concurrent write, and prevent lock latency issues?
The stored procedure is prone to issues like blocking and deadlocks. However, there are ways around that.
For now, why not start the ID off at the bottom of the negative range?
CREATE TABLE FOO
(
ID BIGINT IDENTITY(-9223372036854775808, 1)
)
That gives you a range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
Are you really going to eat up 2^63 + 2^63 numbers?
If you are still committed to the other solution, I can give you a piece of working code. However, application locks and Serializable isolation have to be used.
It is still prone to timeouts or blocking depending upon the timeout setting and the server load.
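One way to sketch the application-lock approach the answer alludes to is sp_getapplock around a sequence table. All object names here (dbo.YearSequence, GetNextYearId) and the year-prefix formula are illustrative assumptions, not the answer's actual code:

```sql
-- Hypothetical year-prefixed ID generator using a table plus an application lock.
CREATE TABLE dbo.YearSequence (Yr INT PRIMARY KEY, LastValue BIGINT NOT NULL);
GO
CREATE PROCEDURE dbo.GetNextYearId @NextId BIGINT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    BEGIN TRANSACTION;
    -- Serialize all callers on a named lock; times out after 5 seconds.
    EXEC sp_getapplock @Resource = 'YearSequence',
                       @LockMode = 'Exclusive',
                       @LockTimeout = 5000;

    DECLARE @Yr INT = YEAR(GETDATE());
    -- Upsert the counter row for the current year (resets naturally each year).
    MERGE dbo.YearSequence AS t
    USING (SELECT @Yr AS Yr) AS s ON t.Yr = s.Yr
    WHEN MATCHED THEN UPDATE SET LastValue = t.LastValue + 1
    WHEN NOT MATCHED THEN INSERT (Yr, LastValue) VALUES (s.Yr, 1);

    -- Two-digit year prefix followed by a 14-digit counter (illustrative format).
    SELECT @NextId = (@Yr % 100) * 100000000000000 + LastValue
    FROM dbo.YearSequence
    WHERE Yr = @Yr;
    COMMIT TRANSACTION;  -- releases the application lock
END
```

As the answer warns, this serializes every caller, so it trades throughput for gap-free, year-scoped numbering and is still subject to lock timeouts under load.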
In short, 2012 introduced sequences. That is basically what you want.

When a transaction is rolled back in timestamp ordering protocol why is it given a new timestamp?

When a transaction is rolled back in timestamp ordering protocol, why is it given a new timestamp?
Why don't we retain the old timestamp?
If you are talking of a scheduler whose operation is timestamp-based, and a rolled-back transaction were allowed to re-enter the scheduling queue with its old timestamp, the net effect might be that the scheduler immediately gives the highest priority to any request coming from that transaction. The net effect of THAT might be that whatever problem caused the transaction to roll back reappears almost instantaneously, perhaps causing a new rollback, which causes another re-entry into the queue, and so on.
Or the net effect of that "immediately re-entering the queue" could be that all other transactions are stalled.
Think of a queue of persons in the post office, where someone with a request that cannot be served is allowed to immediately re-enter the queue at the front (instead of at the back). How long would it then take before it gets to be your turn?
Because there could be other transactions that have already committed with the newer timestamp:
The initial timestamp counter is at X
Transaction T1 starts
T1 is allocated a timestamp, incrementing the counter to X+1
Transaction T2 starts
T2 is allocated a timestamp, incrementing the counter to X+2
T2 commits
T1 rolls back
If T1 rolled the counter back to X, a third transaction would be allocated a value that conflicts with T2's. The same goes for increments and sequences. If you need gap-free (contiguous) sequence values, the transactions have to serialize, and that happens at the price of dismal performance.
In a timestamp ordering protocol, the timestamp assigned to the transaction when starting is used to identify potential conflicts with other transactions. These could be transactions that updated an object this transaction is trying to read or transactions that read the value this transaction is trying to overwrite. As a result, when a transaction is aborted and restarted (i.e. to maintain serializability), then all the operations of the transaction will be executed anew and this is the reason a new timestamp needs to be assigned.
From a theoretical perspective, rerunning the operations again while the transaction is still using the old timestamp would be incorrect & unsafe, since it would be reading/overwriting new values while thinking it's situated in an older moment in time. From a practical perspective, if the transaction keeps using the old timestamp, most likely it will keep aborting & restarting continuously, since it will keep conflicting with the same transactions again and again.

SQL Server - how to ensure identity fields increment correctly even in case of rollback

In SQL Server, if a transaction involving the inserting of a new row gets rolled back, a number is skipped in the identity field.
For example, if the highest ID in the Foos table is 99, then we try to insert a new Foo record but roll back, then ID 100 gets 'used up' and the next Foo row will be numbered 101.
Is there any way this behaviour can be changed so that identity fields are guaranteed to be sequential?
What you are after will never work with identity columns.
They are designed to "give out" a value and forget it, by design, so that they don't cause waits or deadlocks. This property allows IDENTITY columns to be used as a sequence within a highly transactional system with no delays or bottlenecks.
Guaranteeing no gaps means there is NO WAY to implement a 100-insert-per-second system, because there would be a very long queue waiting to find out whether the 1st insert was going to be rolled back.
For the same reason, you normally do not want this behaviour, nor such a number sequence, for a high-volume table. However, for very infrequent, single-process tables (such as invoice numbers assigned by a single monthly process), it is acceptable to put a transaction around a MAX(number)+1 or similar query, e.g.
declare @next int
update sequence_for_tbl set @next = next = next + 1
.. use @next
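Fleshed out, that pattern might look like this; the table and column names follow the snippet, while the single-row seed table and transaction wrapping are assumptions:

```sql
-- One-time setup: a single-row counter table (assumed; seed value illustrative).
CREATE TABLE dbo.sequence_for_tbl (next INT NOT NULL);
INSERT INTO dbo.sequence_for_tbl VALUES (0);
GO
DECLARE @next INT;

BEGIN TRANSACTION;
-- Increment and capture the new value in a single atomic statement;
-- the exclusive row lock held until COMMIT serializes concurrent callers.
UPDATE dbo.sequence_for_tbl SET @next = next = next + 1;

-- ... use @next, e.g. as the invoice number, inside the same transaction ...
COMMIT TRANSACTION;
```

If the transaction rolls back, the UPDATE is undone too, so the counter stays gap-free, at the cost of serializing every caller on that row.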
SQL Identity (autonumber) is Incremented Even with a Transaction Rollback

Where to find a log of identity specification skipping rows

I have a Microsoft SQl server 2005 database with a table whose primary key has the identity specification is set to yes to auto increment.
Recently the primary key skipped two numbers(which I understand is normal as they are not necessarily sequential).
However I would like to find out, if possible, why they skipped and when, i.e. if a stored procedure rollback occured and the primary key sequence didn't rollback or they were deleted somehow.
My question is, does Microsoft SQL server management studio have a designated area that stored records such as this, i.e. transaction logs etc for me to have a look at to try and determine why this skip happened.
Assuming you have an IDENTITY that increments by 1 each time, you will see gaps in your IDs if a ROLLBACK occurs - in the event of a rollback, the IDENTITY value is not reverted, so the ID value assigned for that operation is essentially lost. This is entirely normal behaviour.
Suppose you have a high volume of INSERTs: in the event of one insert failing and rolling back, reverting the IDENTITY value would be a nightmare, as there could have been a whole swathe of other INSERTs in the meantime.
Unless your own code logs it, there is no record. So no: SQL Server does not keep a designated record of this.
A transaction rollback or an INSERT error (which is a single statement rollback) will generate gaps in the number sequence. SQL Server does not take note of this.
