I need to generate progressive invoice numbers, avoiding gaps in the sequence.
At first I thought it was quite easy:
SELECT MAX(Docnumber)+1 as NewDocNumber
from InvoicesHeader
but since it takes some time to build the "insert into InvoicesHeader" query, another request could arrive in the meantime, assigning the same NewDocNumber to both invoices.
I'm now thinking of not generating the DocNumber in advance, and I have changed the query to:
INSERT INTO InvoicesHeader (InvoiceID, ..., DocNumber, ...)
SELECT @InvoiceID, ..., MAX(DocNumber) + 1, ... FROM InvoicesHeader
but although it should solve some problems, it is still not thread safe and still open to race conditions.
What about adding TABLOCK or UPDLOCK, like this:
BEGIN TRANSACTION TR1
INSERT INTO InvoicesHeader WITH (TABLOCK)
    (InvoiceID, ..., DocNumber, ...)
SELECT @InvoiceID, ..., MAX(DocNumber) + 1, ... FROM InvoicesHeader
COMMIT TRANSACTION TR1
Will this solve the issue?
Or is it better to use an ISOLATION LEVEL, NEXT VALUE FOR, or some other solution?
SQL Server already has thread-safe sequence generation. Read about CREATE SEQUENCE; it is available starting with SQL Server 2012. It is a better fit because sequence values are generated outside the transaction scope:
Sequence numbers are generated outside the scope of the current
transaction. They are consumed whether the transaction using the
sequence number is committed or rolled back.
You can get the next value from the sequence. We have been using sequences for generating order numbers and we have not found issues when multiple order numbers are generated in parallel.
SELECT NEXT VALUE FOR DocumentSequenceNumber;
Update, based on comments: if you have four different document types, I would suggest you first generate the sequence number and then concatenate it with the specific document type. It will be easier for you to understand. At the end of the year, you can restart the sequence using ALTER SEQUENCE:
RESTART [ WITH ] The next value that will be returned by
the sequence object. If provided, the RESTART WITH value must be an
integer that is less than or equal to the maximum and greater than or
equal to the minimum value of the sequence object. If the WITH value
is omitted, the sequence numbering restarts based on the original
CREATE SEQUENCE options.
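For example (a sketch; the sequence definition and the 'INV' document-type prefix are assumptions), the setup and the year-end restart could look like this:
CREATE SEQUENCE dbo.DocumentSequenceNumber
    AS INT
    START WITH 1
    INCREMENT BY 1;

-- Generate the next number and concatenate the document-type prefix:
DECLARE @DocNumber INT;
SET @DocNumber = NEXT VALUE FOR dbo.DocumentSequenceNumber;
SELECT CONCAT('INV-', @DocNumber) AS DocNumber;

-- At the end of the year:
ALTER SEQUENCE dbo.DocumentSequenceNumber RESTART WITH 1;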
Related
I have an application in which, when a new user is added to a location, that user is assigned a sequential number. So, for example, the first user at Location 01 would be assigned 01-0001, the second user 01-0002, etc.
While it is simple enough for me to find the max user number for that location at any time and add 1, my issue is that I need this to be thread/collision safe.
While it's not super common, I don't want one query to find the MAX() number while another query is in the process of adding that same number at that same moment. (It has happened in my original app, though only twice in 5 years.)
What would be the best way to go about this? I would prefer not to rely on a unique constraint as that would just throw an error and force the process to try it all again.
You can use
DECLARE @UserId INT

BEGIN TRAN
SELECT @UserId = MAX(UserId)
FROM YourTable WITH (UPDLOCK, HOLDLOCK, ROWLOCK)
WHERE LocationId = @LocationId
-- Increment it, or initialise it if there are no users for this location yet.
SET @UserId = ISNULL(@UserId, 0) + 1
INSERT INTO YourTable (LocationId, UserId, Name)
VALUES (@LocationId, @UserId, @Name)
COMMIT
Only one session at a time can hold an update lock on a resource, so if two concurrent sessions try to insert a row for the same location, one of them will be blocked until the other one commits. The HOLDLOCK is there to give serializable semantics and block the range containing the max.
This is a potential bottleneck but this is by design because of the requirement that the numbers be sequential. Better performing alternatives such as IDENTITY do not guarantee this. In reality though it sounds as though your application is fairly low usage so this may not be a big problem.
Another possible issue is that an ID will be recycled if the user that is the current max for a location gets deleted, but by the sounds of it this is the same in your existing application, so I assume it is either not a problem or just doesn't happen.
You can use a sequence object described here.
Creating a new sequence is very simple; for example, you can use this code:
create sequence dbo.UserId as int
start with 1
increment by 1;
With a sequence object you don't need to worry about collisions. The sequence will always return the next value every time you fetch it with the NEXT VALUE FOR statement, as in this code:
select next value for dbo.UserId;
The next value will be returned correctly even if you roll back the transaction, or if you get the next value in separate, parallel transactions.
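For reference, a sequence value can also be consumed as a column default, so every insert picks up the next number automatically (a sketch; the table definition is an assumption):
CREATE TABLE dbo.Users
(
    UserId INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.UserId),
    Name   NVARCHAR(100) NOT NULL
);

INSERT INTO dbo.Users (Name) VALUES (N'Alice');  -- UserId is filled from the sequence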
Requirement:
To count the number of times a procedure has executed
From what I understand so far, sys.dm_exec_procedure_stats can be used for an approximate count, but only since the last service restart. I found this link on this website relevant, but I need the count to be precise, and it should not be flushed after a service restart.
Can I have some pointers on this, please?
Hack: The procedure I need to keep track of has a select statement, so it returns some rows that are stored in a permanent table called Results. The simplest solution I can think of is to create a column in the Results table to keep track of procedure executions: select the maximum value from this column before the insert and add one to it to increment the count. This solution seems quite stupid to me as well, but it's the best I could think of.
What I thought is that you could create a sequence object, assuming you're on SQL Server 2012 or newer:
CREATE SEQUENCE ProcXXXCounter
AS int
START WITH 1
INCREMENT BY 1 ;
And then in the procedure fetch a value from it:
declare @CallCount int
select @CallCount = NEXT VALUE FOR ProcXXXCounter
There is of course a small overhead with this, but it doesn't cause the kind of blocking issues that could happen with a table, because sequences are handled outside the transaction.
Sequence parameters: https://msdn.microsoft.com/en-us/library/ff878091.aspx
The only way I can think of to keep track of the number of executions even after the service has restarted is to have a table in your database and insert a row into that table inside your procedure every time it is executed.
Maybe add a datetime column as well to collect more info about the execution, and a column for the user who executed it, etc.
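A minimal sketch of that idea (the table and column names, and the 'YourProcedure' name, are assumptions):
CREATE TABLE dbo.ProcExecutionLog
(
    ExecutionId BIGINT IDENTITY(1,1) PRIMARY KEY,
    ProcName    SYSNAME   NOT NULL,
    ExecutedAt  DATETIME2 NOT NULL DEFAULT (SYSDATETIME()),
    ExecutedBy  SYSNAME   NOT NULL DEFAULT (SUSER_SNAME())
);
GO
-- Inside the procedure you want to track:
INSERT INTO dbo.ProcExecutionLog (ProcName)
VALUES (OBJECT_NAME(@@PROCID));

-- The lifetime execution count is then simply:
SELECT COUNT(*) FROM dbo.ProcExecutionLog WHERE ProcName = N'YourProcedure';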
This can be done easily, and without Enterprise Edition, by using extended events. The sqlserver.module_end event will fire; set your predicates correctly and use a histogram target.
http://sqlperformance.com/2014/06/extended-events/predicate-order-matters
https://technet.microsoft.com/en-us/library/ff878023(v=sql.110).aspx
To consume the value, query the histogram target (see the "reviewing target output" examples in the linked documentation).
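A sketch of such a session (the session, database, and procedure names are assumptions; adjust the predicates to your environment):
CREATE EVENT SESSION CountProcExecutions ON SERVER
ADD EVENT sqlserver.module_end
(
    WHERE (sqlserver.database_name = N'MyDatabase'
           AND object_name = N'MyProcedure')
)
ADD TARGET package0.histogram
(
    SET filtering_event_name = N'sqlserver.module_end',
        source = N'object_name',
        source_type = (0)   -- 0 = bucket on an event column rather than an action
);

ALTER EVENT SESSION CountProcExecutions ON SERVER STATE = START;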
Most of my databases use IDENTITY columns as primary keys. I am writing a change log/audit trail in the database and want to use a BIGINT ID to keep track of the changes sequentially.
While BIGINT is pretty big, it will run out of numbers one day and my design will fail to function properly at that point. I have been aware of this problem with my other ID columns and intended to eventually convert to GUIDs/UUIDs as I have used on Postgres in the past.
GUIDs take 16 bytes and BIGINT takes 8. For my current task, I would like to stay with BIGINT for the space savings and the sequencing. Under Postgres, I created a custom sequence with the first two digits as the current year and a fixed number of digits as the sequence within the year. The sequence generator automatically reset the sequence when the year changed.
SQL Server 2008 has no sequence generator. My research has turned up some ideas, most of which involve using a table to maintain the sequence number, updating it within a transaction, and then using it to assign to my data in a separate transaction.
I want to write an SP or function that will update the sequence and return me the new value when called from a trigger on the target table before a row is written. There are many ideas but all seem to talk about locking issues and isolation problems.
Does anyone have a suggestion on how to automate this ID assignment, protect the process from assigning duplicates in a concurrent write, and prevent lock latency issues?
The stored procedure is prone to issues like blocking and deadlocks. However, there are ways around that.
For now, why not start the ID off at the bottom of the negative range?
CREATE TABLE FOO
(
ID BIGINT IDENTITY(-9223372036854775808, 1)
)
That gives you a range from -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807.
Are you really going to eat up 2^63 + 2^63 numbers?
If you are still committed to the other solution, I can give you a piece of working code. However, application locks and Serializable isolation have to be used.
It is still prone to timeouts or blocking depending upon the timeout setting and the server load.
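A rough sketch of that approach (not the working code mentioned above; the table, procedure, and lock-resource names are assumptions):
CREATE TABLE dbo.AuditSequence (CurrentValue BIGINT NOT NULL);
INSERT INTO dbo.AuditSequence (CurrentValue) VALUES (0);
GO
CREATE PROCEDURE dbo.GetNextAuditId
    @NextValue BIGINT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
    BEGIN TRAN;

    -- Serialize callers on a named application lock.
    EXEC sp_getapplock @Resource = 'AuditSequence',
                       @LockMode = 'Exclusive',
                       @LockOwner = 'Transaction',
                       @LockTimeout = 5000;

    -- Increment and capture the new value in a single statement.
    UPDATE dbo.AuditSequence
    SET @NextValue = CurrentValue = CurrentValue + 1;

    COMMIT;  -- releases the transaction-owned application lock
END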
In short, 2012 introduced sequences. That is basically what you want.
I have a table called ticket, and it has a field called number and a foreign key called client that needs to work much like an auto-field (incrementing by 1 for each new record), except that the client chain needs to be able to specify the starting number. This isn't a unique field because multiple clients will undoubtedly use the same numbers (e.g. start at 1001). In my application I'm fetching the row with the highest number, and using that number + 1 to make the next record's number. This all takes place inside of a single transaction (the fetching and the saving of the new record). Is it true that I won't have to worry about a ticket ever getting an incorrect (duplicate) number under a high load situation, or will the transaction protect from that possibility? (note: I'm using PostgreSQL 9.x)
Without locking the whole table on every insert/update, no. The way transactions work in PostgreSQL means that new rows appearing as a result of concurrent transactions never conflict with each other, and that is exactly what would be happening here.
You need to make sure that updates actually cause the same rows to conflict. You would basically need to implement something similar to the mechanism used by PostgreSQL's native sequences.
What I would do is add another column to the table referenced by your client column to represent the last_val of the sequence you'll be using. So each transaction would look something like this:
BEGIN;
SET TRANSACTION SERIALIZABLE;
UPDATE clients
SET footable_last_val = footable_last_val + 1
WHERE clients.id = :client_id;
INSERT INTO footable(somecol, client_id, number)
VALUES (:somevalue,
:client_id,
(SELECT footable_last_val
FROM clients
WHERE clients.id = :client_id));
COMMIT;
That way, if two transactions race, the initial update on the clients table fails due to a serialization conflict before the insert is reached.
You do have to worry about duplicate numbers.
The typical problematic scenario is: transaction T1 reads N, and creates a new row with N+1. But before T1 commits, another transaction T2 sees N as the max for this client and creates another new row with N+1 => conflict.
There are many ways to avoid this; here is a simple piece of plpgsql code that implements one method, assuming a unique index on (client, number). The solution is to let the inserts run concurrently, but in the event of a unique index violation, retry with refreshed values until the insert is accepted. (It's not a busy loop, though, since concurrent inserts are blocked until other transactions commit.)
do
$$
begin
  loop
    begin
      -- client number is assumed to be 1234 for the sake of simplicity
      insert into the_table(client, number)
        select 1234, 1 + coalesce(max(number), 0) from the_table where client = 1234;
      exit;
    exception
      when unique_violation then
        -- nothing (keep looping)
    end;
  end loop;
end$$;
This example is a bit similar to the UPSERT implementation from the PG documentation.
It's easily transferable into a plpgsql function taking the client id as input.
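For instance, a hypothetical function along those lines (the function and table names are assumptions) could look like:
create function next_ticket_number(p_client int) returns int
language plpgsql as
$$
declare
  v_number int;
begin
  loop
    begin
      insert into the_table(client, number)
        select p_client, 1 + coalesce(max(number), 0)
        from the_table where client = p_client
        returning number into v_number;
      return v_number;
    exception
      when unique_violation then
        null;  -- another insert won the race; loop and retry
    end;
  end loop;
end
$$;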
In Oracle there is a mechanism to generate sequence numbers, e.g.:
CREATE SEQUENCE supplier_seq
MINVALUE 1
MAXVALUE 999999999999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
And then execute the statement
supplier_seq.nextval
to retrieve the next sequence number.
How would you create the same functionality in MS SQL Server?
Edit: I'm not looking for ways to automatically generate keys for table records. I need to generate a unique value that I can use as a (logical) ID for a process. So I need the exact functionality that Oracle provides.
There is no exact match.
The closest equivalent is IDENTITY, which you can set as a column property while creating a table. SQL Server will automatically create a running sequence number during insert.
The last inserted value can be obtained by calling SCOPE_IDENTITY() or by consulting the system variable @@IDENTITY (as pointed out by Frans).
If you need the exact equivalent, you would need to create a table and then write a procedure to return the next value and perform other operations. See Mark's response for the pitfalls of this approach.
Edit:
SQL Server has since implemented sequences, similar to Oracle's. Please refer to this question for more details:
How would you implement sequences in Microsoft SQL Server?
Identity is the best and most scalable solution, BUT, if you need a sequence that is not an incrementing int, like 00A, 00B, 00C, or some special sequence, there is a second-best method. If implemented correctly, it scales OK, but if implemented badly, it scales badly. I hesitate to recommend it, but what you do is:
You have to store the "next value" in a table. The table can be a simple, one row, one column table with just that value. If you have several sequences, they can share the table, but you might get less contention by having separate tables for each.
You need to write a single update statement that will increment that value by 1 interval. You can put the update in a stored proc to make it simple to use and prevent repeating it in code in different places.
Using the sequence correctly, so that it will scale reasonably (no, not as well as Identity :-) requires two things: a. the update statement has a special syntax made for this exact problem that will both increment and return the value in a single statement; b. you have to fetch the value from the custom sequence BEFORE the start of a transaction and outside the transaction scope. That is one reason Identity scales -- it returns a new value irrespective of transaction scope, for any attempted insert, but does not roll back on failure. That means that it won't block, and also means you'll have gaps for failed transactions.
The special update syntax varies a little by version, but the gist is that you do an assignment to a variable and the update in the same statement. For 2008, Itzik Ben-Gan has this neat solution: http://www.sqlmag.com/Articles/ArticleID/101339/101339.html?Ad=1
The old-school 2000 and later method looks like this:
DECLARE @localVar INT;
UPDATE SequenceTable SET @localVar = value = value + 5
-- change the tail end to your increment logic
This will both increment and return you the next value.
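Wrapped in a stored procedure as suggested above, and fetched before your main transaction, it might look something like this (a sketch; the table and procedure names are assumptions):
CREATE PROCEDURE dbo.GetNextSequenceValue
    @nextValue INT OUTPUT
AS
    -- Single atomic statement: increments the counter and captures the new value.
    UPDATE SequenceTable SET @nextValue = value = value + 1;
GO

-- Fetch the value BEFORE starting your insert transaction:
DECLARE @id INT;
EXEC dbo.GetNextSequenceValue @nextValue = @id OUTPUT;
BEGIN TRAN;
    -- ... use @id in your inserts here ...
COMMIT;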
If you absolutely cannot have gaps (resist that requirement :-) then it is technically possible to put that update or proc inside the rest of your transaction, but you take a BIG concurrency hit, as every insert waits for the prior one to commit.
I can't take credit on this; I learned it all from Itzik.
Make the field an Identity field. The field will get its value automatically. You can obtain the last inserted value by calling SCOPE_IDENTITY() or by consulting the system variable @@IDENTITY.
The SCOPE_IDENTITY() function is preferred.
As DHeer said there is absolutely no exact match. If you try to build your own procedure to do this you will invariably stop your application from scaling.
Oracle's sequences are highly scalable.
OK, I take it back slightly. If you're really willing to focus on concurrency, and you're willing to accept numbers out of order (as is possible with a sequence), you have a chance. But since you seem rather unfamiliar with T-SQL to begin with, I would start looking at some other options if you're porting an Oracle app to MSSS (is that what you're doing?).
For instance, just generate a GUID in the "nextval" function. That would scale.
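For example (just a sketch):
SELECT NEWID() AS nextval;  -- or use NEWSEQUENTIALID() as a column default if ordering matters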
Oh, and DO NOT use a table for all the values, just to persist your max value in the cache. You'd have to lock it to ensure you hand out unique values, and this is where you'll stop scaling. You'd have to figure out whether there's a way to cache values in memory, with programmatic access to some sort of lightweight locks (memory locks, not table locks).
I wish that SQL Server had this feature. It would make so many things easier.
Here is how I have gotten around this.
Create a table called tblIdentities. In this table put a row with your min and max values and how often the Sequence number should be reset. Also put the name of a new table (call it tblMySeqNum). Doing this makes adding more Sequence Number generators later fairly easy.
tblMySeqNum has two columns. ID (which is an int identity) and InsertDate (which is a date time column with a default value of GetDate()).
When you need a new seq num, call a sproc that inserts into this table and uses SCOPE_IDENTITY() to get the identity created. Make sure you have not exceeded the max in tblIdentities; if you have, return an error, otherwise return your sequence number.
Now, to reset and clean up: have a job that runs as regularly as needed and checks all the tables listed in tblIdentities (just one for now) to see if they need to be reset. If they have hit the reset value or time, call DBCC CHECKIDENT with the RESEED option on the table named in the row (tblMySeqNum in this example). This is also a good time to clear out the extra rows that you don't really need in that table.
DON'T do the cleanup or reseeding in your sproc that gets the identity. If you do then your sequence number generator will not scale well at all.
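A rough sketch of those pieces (the column names and the max check are my guesses at the details described above):
CREATE TABLE tblIdentities
(
    SeqTableName   SYSNAME NOT NULL PRIMARY KEY,  -- e.g. 'tblMySeqNum'
    MinValue       INT NOT NULL,
    MaxValue       INT NOT NULL,
    ResetEveryDays INT NOT NULL
);

CREATE TABLE tblMySeqNum
(
    ID         INT IDENTITY(1,1) PRIMARY KEY,
    InsertDate DATETIME NOT NULL DEFAULT (GETDATE())
);
GO
CREATE PROCEDURE GetNextSeqNum
    @SeqNum INT OUTPUT
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO tblMySeqNum DEFAULT VALUES;
    SET @SeqNum = SCOPE_IDENTITY();

    IF @SeqNum > (SELECT MaxValue FROM tblIdentities WHERE SeqTableName = 'tblMySeqNum')
        RAISERROR('Sequence number exceeded the configured max.', 16, 1);
END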
As I said, it would make so many things easier if this feature were in SQL Server, but I have found that this workaround functions fairly well.
Vaccano
If you are able to upgrade to SQL Server 2012, you can use SEQUENCE objects. Even SQL Server 2012 Express has support for sequences.
CREATE SEQUENCE supplier_seq
AS DECIMAL(38)
MINVALUE 1
MAXVALUE 999999999999999999999999999
START WITH 1
INCREMENT BY 1
CACHE 20;
SELECT NEXT VALUE FOR supplier_seq
SELECT NEXT VALUE FOR supplier_seq
SELECT NEXT VALUE FOR supplier_seq
SELECT NEXT VALUE FOR supplier_seq
SELECT NEXT VALUE FOR supplier_seq
Results in:
---------------------------------------
1
(1 row(s) affected)
---------------------------------------
2
(1 row(s) affected)
---------------------------------------
3
(1 row(s) affected)
---------------------------------------
4
(1 row(s) affected)
---------------------------------------
5
(1 row(s) affected)
Just take care to specify the right data type. If I hadn't specified it, the MAXVALUE you provided wouldn't have been accepted; that's why I've used DECIMAL with the highest precision possible.
More on SEQUENCES here: http://msdn.microsoft.com/en-us/library/ff878091.aspx
This might have already been answered a long time ago... but from SQL 2005 onwards you can use the ROW_NUMBER function... an example would be:
select ROW_NUMBER() OVER (ORDER BY productID) as DynamicRowNumber, xxxxxx,xxxxx
The OVER statement uses the ORDER BY for the unique primary key in my case...
Hope this helps... no more temp tables, or strange joins!!
Not really an answer, but it looks like sequences are coming to SQL Server in 2012.
http://www.sql-server-performance.com/2011/sequence-sql-server-2011/
Not an exact answer, but an addition to some of the existing answers:
SCOPE_IDENTITY (Transact-SQL)
SCOPE_IDENTITY, IDENT_CURRENT, and @@IDENTITY are similar functions
because they return values that are inserted into identity columns.
IDENT_CURRENT is not limited by scope and session; it is limited to a
specified table. IDENT_CURRENT returns the value generated for a
specific table in any session and any scope. For more information, see
IDENT_CURRENT (Transact-SQL).
This means two different sessions can get the same identity value or sequence number, so to avoid this and get a unique number across all sessions, use IDENT_CURRENT.
Exactly because of this
IDENT_CURRENT is not limited by scope and session; it is limited to a specified table.
we need to use SCOPE_IDENTITY(), because SCOPE_IDENTITY gives us the number generated in our own session and scope, and uniqueness is provided by the identity itself.
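For example (a sketch; the table is hypothetical):
INSERT INTO dbo.Orders (CustomerName) VALUES (N'Contoso');

SELECT SCOPE_IDENTITY()            AS LastIdentityInMyScope,  -- this session and scope
       IDENT_CURRENT('dbo.Orders') AS LastIdentityForTable;   -- any session, this table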