Does the starting seed matter for a SQL Server identity column? - sql-server

Are there any technical considerations when setting the seed value for an identity column in SQL Server?
This column will be the PK and marked as identity with auto-increment.
The business does not want to start IDs at 1, because they will be publicly visible in query strings and displayed on several pages. Using a higher random number makes them look better. This all makes sense to me, and I want to make sure I am not missing any technical considerations.
This is an example of what I am going to do:
--Reseed manufacturer ID to a random 5-digit number (10000-99999)
DECLARE @newId INT = FLOOR(RAND() * 90000) + 10000;
DBCC CHECKIDENT ('Manufacturer', RESEED, @newId);

I don't see any issue with starting at some random future number.

Summarizing some comments into an answer.
The starting identity value needs to account for any values already in the table. Because the ID auto-increments, newly generated IDs might collide with existing values if the seed is set too low.
Large gaps may appear in the sequence of IDs. However unlikely, a high starting seed combined with many gaps might lead to exhaustion of the INT range for the column. Then the app goes boom!
Any indexes on the column shouldn't care if it starts at 1 or 99999.
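If you want to guard against collisions with existing rows, you can compare the new seed against the current maximum first. A minimal sketch, assuming the table is dbo.Manufacturer with an identity column named Id:

--Reseed to a random 5-digit number, but never below the current max
DECLARE @newId INT = FLOOR(RAND() * 90000) + 10000;
DECLARE @maxId INT = (SELECT ISNULL(MAX(Id), 0) FROM dbo.Manufacturer);
IF @newId <= @maxId SET @newId = @maxId + 10000; --keep the seed above existing IDs
DBCC CHECKIDENT ('dbo.Manufacturer', RESEED, @newId);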

Related

Circular IDENTITY numbers in column

Is it possible to force an IDENTITY column to recalculate its seed when it reaches the maximum value for the defined data type, so that it fills gaps in the IDs?
Say I have a column of TINYINT data type, which can hold values up to a maximum of 255. When the column is filled with data up to the maximum possible ID, I delete one row from the middle, say ID = 100.
The question is: can I force the IDENTITY to hand out that missing ID next?
You can reseed the IDENTITY (set a new seed), but it will NOT be able to magically find the missing values.
The reseeded IDENTITY column will just keep handing out new values starting at the new seed - which means that, sooner or later, collisions with already existing values will happen.
Therefore, all in all, reseeding an IDENTITY really isn't a very good idea. Just pick a data type large enough to handle your needs.
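To make the collision concrete, a minimal sketch (table name assumed; IDs 1-255 are present except the deleted ID 100):

--Reseed just below the gap
DBCC CHECKIDENT ('dbo.MyTable', RESEED, 99);
--The next insert gets ID 100 and fills the gap, but the insert after
--that gets 101, which already exists, and fails with a PK violation.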
With a type INT, starting at 1, you get over 2 billion possible rows - that should be more than sufficient for the vast majority of cases. With BIGINT, you get roughly 9.2 quintillion (9,223,372,036,854,775,807 - about 9.2 x 10^18) - enough for you?
If you use an INT IDENTITY starting at 1, and you insert a row every second, you need about 68 years before you hit the 2 billion limit ....
If you use a BIGINT IDENTITY starting at 1, and you insert one thousand rows every second, you need a mind-boggling 292 million years before you hit the 9.2 quintillion limit ....
Read more about it (with all the options there are) in the MSDN Books Online.
Yes, you can do this (SET IDENTITY_INSERT needs the table name):
SET IDENTITY_INSERT dbo.MyTable ON;
INSERT INTO dbo.MyTable (id) VALUES (100);
--set it off again
SET IDENTITY_INSERT dbo.MyTable OFF;
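If you first need to find which ID is missing, a rough sketch (table and column names assumed):

--Find the first gap in the ID sequence
SELECT MIN(t.id) + 1 AS missing_id
FROM dbo.MyTable AS t
WHERE NOT EXISTS (SELECT 1 FROM dbo.MyTable AS t2 WHERE t2.id = t.id + 1)
  AND t.id < (SELECT MAX(id) FROM dbo.MyTable);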
Identity Insert

Using INT or GUID as primary key

I was trying to create an ID column in SQL Server (from VB.NET) that would provide a sequence of numbers for every new row created in the database. So I used the following technique to create the ID column.
SELECT * FROM T_Users;

ALTER TABLE T_Users
ADD User_ID INT NOT NULL IDENTITY(1,1) PRIMARY KEY;
Then I registered a few usernames into the database and it worked just fine. For example, the first six rows would be 1, 2, 3, 4, 5, 6. Then I registered 4 more users the next day, but this time the ID numbers jumped from 6 to a very large number, such as: 1, 2, 3, 4, 5, 6, 1002, 1003, 1004, 1005. Then two days later I registered two more users, and the new rows read 3002, 3004.
So my question is: why is it skipping such a large number every other day I register users? Is the technique I used to create the sequence wrong? If it is wrong, can anyone please tell me how to do it right?
As I was getting frustrated with the technique above, I alternatively tried using sequentially generated GUID values. The sequence of GUID values was generated fine. The only downside is that they are very long (four times the size of an INT). My question here is: does using a GUID have any significant advantage over an INT?
Upside of GUIDs:
GUIDs are good if you ever want offline clients to be able to create new records, as you will never get a primary key clash when the new records are synchronised back to the main database.
Downside of GUIDs:
GUIDs as primary keys can have an effect on the performance of the DB, because for a clustered primary key the DB will want to keep the rows in order of the key values. This means a lot of inserts between existing records, because the GUIDs will be random.
An IDENTITY column doesn't suffer from this, because the next record is guaranteed to have the highest value, so the row is just tacked on at the end every time. No re-shuffle needs to happen.
There is a compromise which is to generate a pseudo-GUID which means you would expect a key clash every 70 years or so, but helps the indexing immensely.
The other downsides are that a) they take up more storage space, and b) they are a real pain to write SQL against, i.e. it is much easier to type UPDATE TABLE SET FIELD = 'value' WHERE KEY = 50003 than UPDATE TABLE SET FIELD = 'value' WHERE KEY = '{F820094C-A2A2-49cb-BDA7-549543BB4B2C}'
Your declaration of the IDENTITY column looks fine to me. The gaps in your key values are probably due to failed attempts to add a row: the IDENTITY value gets incremented, but the row never gets committed. Don't let it bother you; it happens in practically every table.
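A minimal sketch of how that happens (the UserName column is assumed):

BEGIN TRAN;
INSERT INTO T_Users (UserName) VALUES ('temp');
ROLLBACK; --the row is gone, but the identity counter is not rolled back
INSERT INTO T_Users (UserName) VALUES ('real');
SELECT SCOPE_IDENTITY(); --one higher than you might expect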
EDIT:
This question covers what I was meaning by pseudo-GUID. INSERTs with sequential GUID key on clustered index not significantly faster
In SQL Server 2005+ you can use NEWSEQUENTIALID() to get a GUID that is greater than any previously generated by that function on the same machine. Note that it can only be used in a DEFAULT constraint. See here for more info: http://technet.microsoft.com/en-us/library/ms189786%28v=sql.90%29.aspx
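A minimal sketch of using it (table and column names assumed):

CREATE TABLE dbo.Example (
    Id UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Example_Id DEFAULT NEWSEQUENTIALID()
        CONSTRAINT PK_Example PRIMARY KEY CLUSTERED,
    Name NVARCHAR(100) NOT NULL
);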
Is the technique I used to create the sequence wrong?
No. If anything, your Google skills are non-existent. A quick search for "SQL Server identity skipping values" will give you a TON of results, including:
SQL Server 2012 column identity increment jumping from 6 to 1000+ on 7th entry
and the canonical:
Why are there gaps in my IDENTITY column values?
You basically assume, wrongly, that SQL Server will not optimize its access for performance. Identity numbers are markers, nothing else; no assumption of having no gaps, please.
In particular: SQL Server preallocates numbers in blocks of 1,000, and if you restart the server (like on your workstation) the remainder of the block is lost.
http://www.sqlserver-training.com/sequence-breaks-gap-in-numbers-after-restart-sql-server-gap-between-numbers-after-restarting-server/
If you use a manual SEQUENCE instead (new in SQL Server 2012) you can define the cache size for this pregeneration and set it to 1 - at the cost of slightly lower performance when you do a lot of inserts.
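A rough sketch of such a sequence (the name dbo.UserIdSeq is illustrative):

--A sequence with no pregeneration (SQL Server 2012+): fewer gaps, slower inserts
CREATE SEQUENCE dbo.UserIdSeq
    AS INT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;

--Take values from it explicitly, or wire it up as a column default:
SELECT NEXT VALUE FOR dbo.UserIdSeq;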
My question here is does using GUID have any significant advantage over INT?
Yes. You can have a lot more rows with GUIDs than with INTs. For example, INT32 is limited to about 2 billion rows. For some of us that is too low (I have tables in the 10 billion range), and even a 64-bit BIGINT is limited. And for a truly zettabyte-scale database, you have to use a GUID in sequence, self-generated.
No normal human sees the difference, though, as we all do not really deal with that many rows. And the larger size makes a lot of things slower (larger key size = larger space in indices = larger indices = more memory / IO for the same operation). Plus, even your sequential ID will jump.
Why not just adjust your expectations to reality - identity is not meant to be gap-free - or use a sequence with a cache size of 1?

How can I get current autoincrement value

How can I get the last autoincrement value of a specific table right after I open the database? It's not last_insert_rowid(), because there is no insert transaction. In other words, I want to know in advance which number autoincrement will choose when inserting a new row into a particular table.
It depends on how the autoincremented column has been defined.
If the column definition is INTEGER PRIMARY KEY AUTOINCREMENT, then SQLite will keep the largest ID in an internal table called sqlite_sequence.
If the column definition does NOT contain the keyword AUTOINCREMENT, SQLite will use its ‘regular’ routine to determine the new ID. From the documentation:
The usual algorithm is to give the newly created row a ROWID that is one larger than the largest ROWID in the table prior to the insert. If the table is initially empty, then a ROWID of 1 is used. If the largest ROWID is equal to the largest possible integer (9223372036854775807) then the database engine starts picking positive candidate ROWIDs at random until it finds one that is not previously used. If no unused ROWID can be found after a reasonable number of attempts, the insert operation fails with an SQLITE_FULL error. If no negative ROWID values are inserted explicitly, then automatically generated ROWID values will always be greater than zero.
I remember reading that, for columns without AUTOINCREMENT, the only surefire way to determine the next ID is to VACUUM the database first; that will reset all ID counters to the largest existing ID for that table + 1. But I can’t find that quote anymore, so this may no longer be true.
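For the AUTOINCREMENT case, a minimal sketch of peeking at the stored counter (table name assumed; this is only a hint, since another writer may claim the next ID first):

--seq holds the largest ROWID ever handed out for that table
SELECT seq FROM sqlite_sequence WHERE name = 'T_Users';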
That said, I agree with slash_rick_dot that fetching auto-incremented IDs beforehand is a bad idea, especially if there’s even a remote chance that another process might write to the database at the same time.
Different databases implement auto-increment differently. But as far as I know, none of them will answer the question you are asking.
The auto increment feature is intended to create a unique ID for a newly added table row. If a row hasn't been inserted yet, then the feature hasn't produced the id.
And it makes sense... If you did get the next auto increment number, what would you do with it? Likely the intent is to assign it as the primary key of the not-yet-inserted new row. But between the time you got the id, and the time you used it to insert the row, the database could have used that id to insert a row for another process.
Your choices are these: manage the creation of IDs yourself, or wait until rows are inserted before using their auto-created identifiers.
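The latter is the usual pattern - insert first, then read the generated ID back. A minimal SQLite sketch (table and column names assumed):

INSERT INTO T_Users (name) VALUES ('John');
SELECT last_insert_rowid(); --ID of the row just inserted, on this connection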

Concatenate bigint values in SQL Server

I have a table with 3 columns:
PersonId uniqueidentifier -- key
DeviceId uniqueidentifier -- key
Counter bigint
The counter arrives in ascending order but sometimes has gaps. An example of the counter values is (1, 2, 3, 1000, 10000, 10001, 10002, ...). The counter values are saved one at a time. If I insert one row per counter value, the table gets big very fast. I must keep the last 1000 counter values and can delete earlier ones.
Is it possible to concatenate the counter values into one or a few rows of type varbinary(8000), and remove early values from the beginning of the binary as part of the insert operation? I would like help writing this query. I would prefer not to use nvarchar, because each character would take up 2 bytes. There may be a better way than I can envision. Any help is appreciated!
Why are you trying to do that, versus using a plain table with PersonID, DeviceID, and Counter, and allowing only a certain number of counters per (PersonID, DeviceID) pair? (See the sketch below.)
If your goal is to save space, remember that varbinary(8000) caps each row at 8000 bytes, i.e. at most 1000 bigint values, regardless of how many counters you actually have.
How likely is it that most of these PersonID and DeviceID pairs will have 1000 counters?
In the end, you are just making it more complicated for yourself, and harder to maintain for future employees - but are you really saving space?
You are also going to have to add some transactional processing, which will take away more of your server's resources.
But to answer your question strictly: yes, it is possible. A trigger, or a step at the end of a stored procedure, could handle what you are trying to do.
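A minimal sketch of the plain-table approach, trimming to the newest 1000 counters per pair (the table name dbo.CounterLog is assumed):

--Delete everything but the newest 1000 counters per (PersonId, DeviceId)
WITH ranked AS (
    SELECT Counter,
           ROW_NUMBER() OVER (PARTITION BY PersonId, DeviceId
                              ORDER BY Counter DESC) AS rn
    FROM dbo.CounterLog
)
DELETE FROM ranked
WHERE rn > 1000;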
Maybe you can choose an in-between solution: make each row hold 10 or so columns of type bigint. That cuts the number of rows by ten and reduces the per-row overhead by 10x as well. Maybe that is enough for you.

Identity column maximum value in SQLite DBs

I have a purely academic question about SQLite databases.
I am using SQLite.net to use a database in my WinForm project, and as I was setting up a new table, I got to thinking about the maximum values of an ID column.
I use the IDENTITY for my [ID] column, which according to SQLite.net DataType Mappings, is equivalent to DbType.Int64. I normally start my ID columns at zero (with that row as a test record) and have the database auto-increment.
The maximum value (Int64.MaxValue) is 9,223,372,036,854,775,807. For my purposes, I'll never even scratch the surface of that maximum, but what happens in a database that does? While trying to read up on this, I found that DB2 apparently "wraps" the value around to the negative end (-9,223,372,036,854,775,808) and increments from there, until the database can't insert rows because the ID column has to be unique.
Is this what happens in SQLite and/or other database engines?
I doubt anybody knows for sure, because if a million rows per second were being inserted, it would take about 292,471 years to reach the wrap-around-risk point -- and databases have been around for a tiny fraction of that time (actually, so has Homo Sapiens;-).
IDENTITY is not actually the proper way to auto-increment in SQLite; using it means you have to do the incrementing in the app layer. In the SQLite shell, try:
create table bar (id IDENTITY, name VARCHAR);
insert into bar (name) values ('John');
select * from bar;
You will see that id is simply null. SQLite does not give any special significance to IDENTITY, so it is basically an ordinary (untyped) column.
On the other hand, if you do:
create table baz (id INTEGER PRIMARY KEY, name VARCHAR);
insert into baz (name) values ('John');
select * from baz;
it will be 1 as I think you expect.
Note that there is also INTEGER PRIMARY KEY AUTOINCREMENT. The basic difference is that AUTOINCREMENT ensures keys are never reused. So if you remove John, 1 will never be reused as an id. Either way, if you use PRIMARY KEY (with optional AUTOINCREMENT) and run out of ids, SQLite is supposed to fail with SQLITE_FULL, not wrap around.
By using IDENTITY, you do open up the (probably irrelevant) possibility that your app will incorrectly wrap around if the db were ever full. This is quite possible, because IDENTITY columns in SQLite can hold any value (including negative ints). Again, try:
insert into bar values ('What the hell', 'Bill');
insert into bar values (-9, 'Mary');
Both of those are completely valid. They would be valid for baz too. However, with baz you can avoid manually specifying id. That way, there will never be junk in your id column.
The documentation at http://www.sqlite.org/autoinc.html indicates that the ROWID will try to find an unused value via randomization once it reaches its maximum number.
For AUTOINCREMENT, once the maximum value has been reached, all attempts to insert into the table will fail with SQLITE_FULL:
If the table has previously held a row with the largest possible ROWID, then new INSERTs are not allowed and any attempt to insert a new row will fail with an SQLITE_FULL error.
This is necessary, as the AUTOINCREMENT guarantees that the ID is monotonically increasing.
I can't speak to any specific DB2 implementation logic, but the "wrap around" behavior you describe is standard for signed numbers implemented via two's complement.
As for what would actually happen, that's completely up in the air as to how the database would handle it. The issue arises at the point in time of actually CREATING the id that's too large for the field, as it's unlikely that the engine internally uses a data type of more than 64 bits. At that point, it's anybody's guess...the internal language used to develop the engine could throw up, the number could silently wrap around and just cause a primary key violation (assuming that a conflicting ID existed), the world could come to an end due to your overflow, etc.
But pragmatically, Alex is correct. The theoretical limit on the number of rows involved here (assuming one ID per row and not any sort of cheater identity-insert shenanigans) basically renders the situation moot: by the time you could conceivably enter that many rows at even a stupendous insertion rate, we'll all be dead anyway, so it doesn't matter :)
