I have a purely academic question about SQLite databases.
I am using SQLite.net to work with a database in my WinForms project, and as I was setting up a new table, I got to thinking about the maximum values of an ID column.
I use IDENTITY for my [ID] column, which, according to the SQLite.net DataType Mappings, is equivalent to DbType.Int64. I normally start my ID columns at zero (with that row as a test record) and have the database auto-increment.
The maximum value (Int64.MaxValue) is 9,223,372,036,854,775,807. For my purposes, I'll never even scratch the surface of that maximum, but what happens in a database that does? While trying to read up on this, I found that DB2 apparently "wraps" the value around to the most negative value (-9,223,372,036,854,775,808) and increments from there, until the database can't insert rows because the ID column has to be unique.
Is this what happens in SQLite and/or other database engines?
I doubt anybody knows for sure, because at a million rows inserted per second, it would take about 292,471 years to reach the wrap-around-risk point -- and databases have been around for a tiny fraction of that time (actually, so has Homo sapiens ;-).
IDENTITY is not actually the proper way to auto-increment in SQLite; it would require you to do the incrementing in the app layer. In the SQLite shell, try:
create table bar (id IDENTITY, name VARCHAR);
insert into bar (name) values ('John');
select * from bar;
You will see that id is simply null. SQLite does not give any special significance to IDENTITY, so it is basically an ordinary (untyped) column.
On the other hand, if you do:
create table baz (id INTEGER PRIMARY KEY, name VARCHAR);
insert into baz (name) values ('John');
select * from baz;
the id will be 1, as I think you expect.
Note that there is also INTEGER PRIMARY KEY AUTOINCREMENT. The basic difference is that AUTOINCREMENT ensures keys are never reused: if you remove John, 1 will never be reused as an id. Either way, if you use PRIMARY KEY (with optional AUTOINCREMENT) and run out of ids, SQLite is supposed to fail with SQLITE_FULL, not wrap around.
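A quick demo of the difference in the SQLite shell (the table name baz2 is made up here):
create table baz2 (id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR);
insert into baz2 (name) values ('John');  -- John gets id 1
delete from baz2 where name = 'John';
insert into baz2 (name) values ('Mary');  -- Mary gets id 2; 1 is never reused
select * from baz2;
Drop the AUTOINCREMENT keyword and Mary would get id 1 again, because a plain INTEGER PRIMARY KEY just picks one more than the largest ROWID currently in the table.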
By using IDENTITY, you do open up the (probably irrelevant) possibility that your app will incorrectly wrap around if the db were ever full. This is quite possible, because IDENTITY columns in SQLite can hold any value (including negative ints). Again, try:
insert into bar VALUES ('What the hell', 'Bill');
insert into bar VALUES (-9, 'Mary');
Both of those are completely valid. They would be valid for baz too. However, with baz you can avoid manually specifying id. That way, there will never be junk in your id column.
The documentation at http://www.sqlite.org/autoinc.html indicates that SQLite will try to find an unused ROWID via randomization once the maximum value has been reached.
With AUTOINCREMENT, once the table has held the maximum value, all further attempts to insert into it fail with SQLITE_FULL:
If the table has previously held a row with the largest possible ROWID, then new INSERTs are not allowed and any attempt to insert a new row will fail with an SQLITE_FULL error.
This is necessary, as the AUTOINCREMENT guarantees that the ID is monotonically increasing.
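You don't have to wait 292,471 years to see this; you can provoke SQLITE_FULL by planting the largest possible ROWID yourself. A minimal sketch in the SQLite shell (table name is invented here):
create table full_demo (id INTEGER PRIMARY KEY AUTOINCREMENT, name VARCHAR);
insert into full_demo (id, name) values (9223372036854775807, 'max');
insert into full_demo (name) values ('next');
-- Error: database or disk is full (SQLITE_FULL)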
I can't speak to any specific DB2 implementation logic, but the "wrap around" behavior you describe is standard for numbers that represent signed values via two's complement.
As for what would actually happen, that's completely up in the air, as it depends on how the database engine handles it. The issue arises at the moment of actually creating an id that's too large for the field, since it's unlikely that the engine internally uses a data type of more than 64 bits. At that point, it's anybody's guess: the language used to develop the engine could throw up, the number could silently wrap around and just cause a primary key violation (assuming a conflicting ID existed), the world could come to an end due to your overflow, etc.
But pragmatically, Alex is correct. The theoretical limit on the number of rows involved here (assuming it's one id per row and not any sort of cheater identity-insert shenanigans) basically renders the situation moot: by the time you could conceivably enter that many rows at even a stupendous insertion rate, we'll all be dead anyway, so it doesn't matter :)
Related: On choosing GUID vs identity INT for primary key…
Is it true that an insert of a GUID is going to take longer because it has to search for the sequential place in the index to put that GUID? And that it can cause paging?
If this is true, maybe we should have used identity INT all along?
Thanks.
It is true that a newer GUID value can be less than a previously inserted GUID, and yes, enough of those can force the index to split pages. An INT or BIGINT auto-increment/identity column wouldn't run into that unless you manually inserted lower identity values with IDENTITY_INSERT ON.
If you can't change from a GUID for some reason, check out
NEWSEQUENTIALID()
http://msdn.microsoft.com/en-us/library/ms189786.aspx
It will generate a GUID greater than any previously generated one. The caveat is that the "greater than" property only holds true until the machine restarts.
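Note that NEWSEQUENTIALID() can only be used in a DEFAULT constraint on a uniqueidentifier column, so the usual pattern looks roughly like this (table and constraint names are invented):
CREATE TABLE dbo.Orders
(
    OrderID UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Orders_OrderID DEFAULT NEWSEQUENTIALID(),
    OrderDate DATETIME NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderID)
);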
Brad
I think you mean CLUSTERED INDEX more so than PRIMARY KEY. Of course, ANY index will get fragmented rather quickly when using NEWID() as the values are not ever-increasing and hence cause page-splits.
I provided a rather detailed explanation of the effects of non-sequential INSERT operations here:
What is fragmenting my index on a table with stable traffic?
Yes, NEWSEQUENTIALID() is far better, at least in terms of GUIDs, in that it is sequential, BUT it is not a guarantee against page splits. The issue is that the starting value is determined based on a value that gets reset when the server is restarted. Hence, after a restart, the new sequential range can start lower than the previous first record. The first INSERT at that point will cause a page split, but from there it should be fine.
Also, a UNIQUEIDENTIFIER is 16 bytes, compared to 4 for INT and even 8 for BIGINT. That increased size makes each row take up more space, and hence fewer rows fit on each 8 KB data page. Allocating new pages takes time, especially if you need to grow the data file to fit the new pages (and the larger the rows, the faster the data file fills). Hence, yes, you most likely should have gone with INT IDENTITY from the start.
In such cases that an external and/or portable identifier is needed (this is where GUIDs are very handy), then you should still start with an INT (or even BIGINT) IDENTITY field as the clustered PK and have another column be the GUID with a UNIQUE INDEX placed on it. This is known as an alternate key and works well in these situations.
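A sketch of that layout (all object names here are illustrative, and NEWSEQUENTIALID() assumes SQL Server 2005+):
CREATE TABLE dbo.Customers
(
    CustomerID INT IDENTITY(1,1) NOT NULL,
    CustomerGUID UNIQUEIDENTIFIER NOT NULL
        CONSTRAINT DF_Customers_GUID DEFAULT NEWSEQUENTIALID(),
    CustomerName NVARCHAR(100) NOT NULL,
    -- narrow, ever-increasing clustered PK for joins and the clustered index
    CONSTRAINT PK_Customers PRIMARY KEY CLUSTERED (CustomerID),
    -- portable external identifier as the alternate key
    CONSTRAINT UQ_Customers_GUID UNIQUE NONCLUSTERED (CustomerGUID)
);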
I was trying to create an ID column in SQL Server, via VB.net, that would provide a sequence of numbers for every new row created in the database. So I used the following technique to create the ID column.
SELECT * FROM T_Users;

ALTER TABLE T_Users
ADD User_ID INT IDENTITY(1,1) NOT NULL PRIMARY KEY;
Then I registered a few usernames into the database and it worked just fine. For example, the first six rows would be 1, 2, 3, 4, 5, 6. Then I registered 4 more users the NEXT day, but this time the ID numbers jumped from 6 to a very large number, such as: 1, 2, 3, 4, 5, 6, 1002, 1003, 1004, 1005. Then two days later I registered two more users and the new rows read 3002, 3004.

So my question is: why is it skipping such a large number every other day I register users? Is the technique I used to create the sequence wrong? If it is wrong, can anyone please tell me how to do it right?

Now, as I was getting frustrated with the technique used above, I alternatively tried to use sequentially generated GUID values. The sequence of GUID values was generated fine. However, the only downside is that it generates very long values (four times the size of an INT). My question here is: does using a GUID have any significant advantage over an INT?
Regards,
Upside of GUIDs:
GUIDs are good if you ever want offline clients to be able to create new records, as you will never get a primary key clash when the new records are synchronised back to the main database.
Downside of GUIDs:
GUIDs as primary keys can have an effect on the performance of the DB, because for a clustered primary key, the DB will want to keep the rows in order of the key values. But this means a lot of inserts between existing records, because the GUIDs will be random.
Using an IDENTITY column doesn't suffer from this, because the next record is guaranteed to have the highest value so far, so the row is just tacked onto the end every time. No re-shuffle needs to happen.
There is a compromise: generate a pseudo-GUID, which means you would expect a key clash only every 70 years or so, but which helps the indexing immensely.
The other downsides are that a) they do take up more storage space, and b) it's a real pain to write SQL against them, e.g. it's much easier to type UPDATE TABLE SET FIELD = 'value' WHERE KEY = 50003 than UPDATE TABLE SET FIELD = 'value' WHERE KEY = '{F820094C-A2A2-49cb-BDA7-549543BB4B2C}'
Your declaration of the IDENTITY column looks fine to me. The gaps in your key values are probably due to failed attempts to add a row. The IDENTITY value will be incremented but the row never gets committed. Don't let it bother you, it happens in practically every table.
EDIT:
This question covers what I was meaning by pseudo-GUID. INSERTs with sequential GUID key on clustered index not significantly faster
In SQL Server 2005+ you can use NEWSEQUENTIALID() to get a GUID value that is supposed to be greater than the previously generated ones. See here for more info: http://technet.microsoft.com/en-us/library/ms189786%28v=sql.90%29.aspx
Is the technique I used to create the sequence wrong?
No. If anything, your Google skills are non-existent. A quick search for "SQL Server identity skipping values" will give you a TON of results, including:
SQL Server 2012 column identity increment jumping from 6 to 1000+ on 7th entry
and the canonical:
Why are there gaps in my IDENTITY column values?
You basically assume, wrongly, that SQL Server will not optimize its access for performance. Identity numbers are markers, nothing else; please make no assumption that they are gapless.
In particular, SQL Server preallocates identity values in blocks of 1,000, and if you restart the server (as happens regularly on a workstation) the unused remainder of the block is lost.
http://www.sqlserver-training.com/sequence-breaks-gap-in-numbers-after-restart-sql-server-gap-between-numbers-after-restarting-server/
If you use a manual SEQUENCE instead (new in SQL Server 2012) you can define the cache size for this pre-generation and set it to 1 (or turn caching off), at the cost of slightly lower performance when you do a lot of inserts.
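For illustration, a sketch assuming SQL Server 2012+ (object names are hypothetical). NO CACHE means no values are pre-generated, so a restart loses nothing:
CREATE SEQUENCE dbo.UserIdSeq
    AS INT
    START WITH 1
    INCREMENT BY 1
    NO CACHE;

CREATE TABLE dbo.T_Users2
(
    User_ID INT NOT NULL
        CONSTRAINT DF_T_Users2_UserID DEFAULT (NEXT VALUE FOR dbo.UserIdSeq)
        CONSTRAINT PK_T_Users2 PRIMARY KEY,
    UserName NVARCHAR(50) NOT NULL
);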
My question here is does using GUID have any significant advantage over INT?
Yes. You can have a lot more rows with GUIDs than with ints. For example, an int32 is limited to about 2 billion values. For some of us that is too low (I have tables in the 10 billion range), and even a 64-bit bigint is limited. For a truly zettabyte-scale database, you have to use a sequential, self-generated GUID.
For any normal human, though, there is no visible difference, as we all do not really deal with that many rows. And the larger size makes a lot of things slower (larger key size = more space in indices = larger indices = more memory/IO for the same operation). Plus, even your sequential id will jump.
Why not just adjust your expectations to reality (identity is not meant to be gapless) or use a sequence with a cache of 1?
How can I get the last autoincrement value for a specific table right after I open the database? It's not last_insert_rowid(), because there has been no insert on this connection. In other words, I want to know in advance which number the autoincrement will choose when inserting a new row into a particular table.
It depends on how the autoincremented column has been defined.
If the column definition is INTEGER PRIMARY KEY AUTOINCREMENT, then SQLite keeps the largest ROWID it has ever handed out for that table in an internal table called sqlite_sequence.
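In that case you can read the stored value directly; a minimal sketch ('mytable' stands in for your table's name):
SELECT seq FROM sqlite_sequence WHERE name = 'mytable';
The next automatically generated id will normally be seq + 1. (Note that sqlite_sequence only exists once at least one AUTOINCREMENT table has been created.)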
If the column definition does NOT contain the keyword AUTOINCREMENT, SQLite will use its ‘regular’ routine to determine the new ID. From the documentation:
The usual algorithm is to give the newly created row a ROWID that is one larger than the largest ROWID in the table prior to the insert. If the table is initially empty, then a ROWID of 1 is used. If the largest ROWID is equal to the largest possible integer (9223372036854775807) then the database engine starts picking positive candidate ROWIDs at random until it finds one that is not previously used. If no unused ROWID can be found after a reasonable number of attempts, the insert operation fails with an SQLITE_FULL error. If no negative ROWID values are inserted explicitly, then automatically generated ROWID values will always be greater than zero.
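So for a column without AUTOINCREMENT, the best you can do is compute a likely candidate from the current maximum; a minimal sketch of that guess ('mytable' again stands in for your table, and see the concurrency caveat below):
SELECT IFNULL(MAX(rowid), 0) + 1 AS probable_next_id FROM mytable;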
I remember reading that, for columns without AUTOINCREMENT, the only surefire way to determine the next ID is to VACUUM the database first; that will reset all ID counters to the largest existing ID for that table + 1. But I can’t find that quote anymore, so this may no longer be true.
That said, I agree with slash_rick_dot that fetching auto-incremented IDs beforehand is a bad idea, especially if there’s even a remote chance that another process might write to the database at the same time.
Different databases implement auto-increment differently. But as far as I know, none of them will answer the question you are asking.
The auto increment feature is intended to create a unique ID for a newly added table row. If a row hasn't been inserted yet, then the feature hasn't produced the id.
And it makes sense... If you did get the next auto increment number, what would you do with it? Likely the intent is to assign it as the primary key of the not-yet-inserted new row. But between the time you got the id, and the time you used it to insert the row, the database could have used that id to insert a row for another process.
Your choices are this: manage the creation of ids yourself, or wait until rows are inserted before using their auto-created identifiers.
I know that an SQLite database automatically generates a unique id (autoincrement) for every record inserted.
Does anyone know if there is any possibility of running out of these system unique IDs in an sqlite3 database while executing a REPLACE query?
I mean that every piece of data in the database has its own type; the system unique id, for example, is something like an int. What would the database do with the next record if the generated unique id were already equal to MAX_INT?
Thanks.
The maximum possible ID is 9223372036854775807 (2^63 - 1). In any case, if it can't find a new auto-increment primary key, it will return SQLITE_FULL.
There is one other important point. If you use INTEGER PRIMARY KEY, it can reuse keys that were in the table earlier, but have been deleted. If you use INTEGER PRIMARY KEY AUTOINCREMENT, it can never reuse an auto-increment key.
http://www.sqlite.org/autoinc.html
"If the largest ROWID is equal to the largest possible integer (9223372036854775807) then the database engine starts picking positive candidate ROWIDs at random until it finds one that is not previously used. If no unused ROWID can be found after a reasonable number of attempts, the insert operation fails with an SQLITE_FULL error. If no negative ROWID values are inserted explicitly, then automatically generated ROWID values will always be greater than zero."
Given that sqlite ids are 64-bit, you could insert a new record every 100 milliseconds for roughly the next 29 billion years before running out (give or take; you get the idea).
I have used IDENTITY on my ID primary key column.
And then I insert some data.
For example.
Data 1 -> Add Successful without error. ID 1
Data 2 -> Add Successful without error. ID 2
Data 3 -> Add Fail with error.
Data 4 -> Add Fail with error.
Data 5 -> Add Successful without error. ID 5
You can see that the ID has jumped from 2 to 5.
Why? How can I solve this?
Why would that be a problem?
Normally, you'll use an identity in a primary key column. This primary key is then a surrogate key, which means that it has absolutely no business value / business meaning.
It is just an 'administrative' fact, which is necessary so that the database can uniquely identify a record.
So it doesn't matter what this value is, and it also doesn't matter that there are gaps. Why do you want them to be consecutive?
And suppose that they were consecutive -- that no gaps appeared when an insert fails -- what would you do when you delete a row and insert one later on? Would you fill in the gaps as well?
This is by design: SQL Server first increments the counter and then tries to create the row; if that fails, the transaction (there is always an implicit transaction) is rolled back, but the auto-increment value is not reused. I would be very surprised to see that this can be avoided (eventually you could call some command and reset the value to the current maximum). You can always use a trigger to generate these values, but that has performance implications. Usually you should not care about the value of the auto-increment; it's just an integer, and you would have the same situation later in your application if th
If an insert failed, you can, for the next insert, use SET IDENTITY_INSERT mytable ON and calculate the next identity by hand, using MAX(myfield) + 1. You might have concurrency issues though.
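For completeness, a sketch of that workaround (table and column names are hypothetical; note that IDENTITY_INSERT requires an explicit column list):
SET IDENTITY_INSERT dbo.MyTable ON;

INSERT INTO dbo.MyTable (ID, Name)
SELECT ISNULL(MAX(ID), 0) + 1, 'John'
FROM dbo.MyTable;

SET IDENTITY_INSERT dbo.MyTable OFF;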
But this is a kludge. There's nothing wrong with gaps.
@Frederik answered most of it -- I would just add that you are mixing up primary keys and business keys. An invoice (or whatever) should be identified by an invoice number -- a business key which should have a UNIQUE constraint on its column in the table. The primary key is there to identify a row in the table and should be used by the database (for joins, etc.) and by DBAs only.
Exposing primary keys to business users will end up in trouble and the database will sooner or later lose referential integrity -- always does, people are creative.