Suppose I have 100 rows in my table, and my primary key is an integer that auto-increments by 1, starting from 1. If I consolidate my data and clear the table once the row ID reaches 100, will SQL Server reuse the deleted primary keys?
Since the primary key increments by 1, if the answer to the above question is no, what will happen on the next insert once the primary key reaches the biggest number an INT can hold?
MSSQL will not reuse primary keys that have been deleted using DELETE (I'm assuming you are talking about identity incrementation). If you TRUNCATE the table, it will reset the seed and reuse them.
If you go over the max for INT, the next insert will indeed just fail (with an arithmetic overflow error). You can convert your INT column to BIGINT to avoid that.
BIGINT has a max of 9,223,372,036,854,775,807 and INT has a max of 2,147,483,647, but note that you can also use the negative halves of those ranges.
You can read about those caps here: https://learn.microsoft.com/fr-fr/sql/t-sql/data-types/int-bigint-smallint-and-tinyint-transact-sql?view=sql-server-2017
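If you do need to widen the column, here is a minimal sketch, assuming a plain INT column named Id on a hypothetical table dbo.MyTable that is not part of any key or index (if it is the primary key, the constraint has to be dropped and recreated first, as discussed further down):
ALTER TABLE dbo.MyTable ALTER COLUMN Id BIGINT NOT NULL;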
No, SQL Server won't reuse IDs. If you have 100 records with IDs 1 through 100 and delete them all, the next inserted row will have ID = 101.
But if you want to start from 1 after deleting, you can use this command:
DBCC CHECKIDENT('TableName', RESEED, 0)
It will reseed your identity and make it start from 1 all over again.
It depends on what you mean by
I...clear the table once row id reaches 100
If you're issuing a DELETE statement, then no, the identity column will issue 101 on the next insert.
If you're issuing a TRUNCATE TABLE statement, then yes, the table will reseed to 1 (or whatever your seed value is) on the next insert.
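A quick demonstration of the difference, using a made-up table:
CREATE TABLE dbo.Demo (Id INT IDENTITY(1,1) PRIMARY KEY, Name VARCHAR(50));
INSERT INTO dbo.Demo (Name) VALUES ('a'), ('b');
DELETE FROM dbo.Demo;                      -- identity is not reset
INSERT INTO dbo.Demo (Name) VALUES ('c');  -- gets Id = 3
TRUNCATE TABLE dbo.Demo;                   -- identity resets to the seed
INSERT INTO dbo.Demo (Name) VALUES ('d');  -- gets Id = 1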
So, I have a database with 10 users, each with an ID from 1 to 10. If I delete those 10 users and then create a new one, the new user gets the ID 11, not 1. Any ideas?
Instead of using the DELETE statement (which leaves the auto-increment counter alone, so it continues from the last inserted index), use TRUNCATE:
TRUNCATE TABLE database_name.table_name;
TRUNCATE REFERENCE
Any AUTO_INCREMENT value is reset to its start value. This is true even for MyISAM and InnoDB, which normally do not reuse sequence values.
I created a table and inserted 4 rows into it. I ran the below query
SELECT seed_value as SeedValue, last_value as identityValue
FROM sys.identity_columns
WHERE object_id=OBJECT_ID('ALJtest1')
and got the result as
SeedValue| identityValue
-------------------------
1 | 4
Then I reseeded the table using
DBCC CHECKIDENT('DBO.ALJtest1', RESEED, 10)
When I ran the below query this time
SELECT seed_value as SeedValue, last_value as identityValue
FROM sys.identity_columns
WHERE object_id=OBJECT_ID('ALJtest1')
I got the result as
SeedValue| identityValue
-------------------------
1 | 10
Is there a way to find the last applied seed value on a table in SQL Server 2012?
RESEED, despite the name, doesn't change the identity's seed value, instead it simply sets the next identity value to generate. There is no way to change an identity column's actual seed value after it's created. From the documentation:
The seed value is the value inserted into an identity column for the
very first row loaded into the table. All subsequent rows contain the
current identity value plus the increment value where current identity
value is the last identity value generated for the table or view.
You cannot use DBCC CHECKIDENT to perform the following tasks:
Change the original seed value that was specified for an identity column when the table or view was created.
Reseed existing rows in a table or view.
To change the original seed value and reseed any existing rows, you
must drop the identity column and recreate it specifying the new seed
value. When the table contains data, the identity numbers are added to
the existing rows with the specified seed and increment values. The
order in which the rows are updated is not guaranteed.
So to answer your question: no, there is no way to know the last value specified in a DBCC CHECKIDENT(..., RESEED), because the current identity value may have already changed after inserts.
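For what it's worth, the built-in metadata functions expose the same two values, so you can see the distinction without querying sys.identity_columns:
SELECT IDENT_SEED('ALJtest1')    AS OriginalSeed,     -- still 1 after the RESEED
       IDENT_CURRENT('ALJtest1') AS CurrentIdentity;  -- 10 right after the RESEED above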
We have a production table with 770 million rows and change. We want (need?) to change the primary ID column from INT to BIGINT to allow for future growth (and to avoid the sudden stop when the 32-bit integer space is exhausted).
Experiments in DEV have shown that this is not as simple as altering the column, since we would need to drop the index and then re-create it. So far in DEV (which is a bit humbler than PROD), dropping the index has not finished after an hour and a half. This table is hit 24/7, and taking it offline for that long is not an option.
Has anyone else had to deal with a similar situation? How did you get it done?
Are there alternatives?
Edit: Additional Info:
The Primary key is clustered.
You could attempt a staged approach.
Create a new bigint column
Create an insert trigger to keep new entries in sync across the two columns
Execute an update to populate all the empty values in the bigint column with the converted value
Change the primary index on the table from your old id column to the new one
Point any FKs and queries to the new column
Change the new column to become your identity column and remove the insert trigger from #2
Delete the old ID column
You should end up spreading the pain out over these 7 steps instead of hitting it all at once.
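A rough sketch of the first three steps, assuming a table dbo.Orders with identity column Id (all object names here are hypothetical):
ALTER TABLE dbo.Orders ADD IdNew BIGINT NULL;  -- step 1
GO
CREATE TRIGGER TR_Orders_SyncId ON dbo.Orders AFTER INSERT  -- step 2
AS
    UPDATE o SET IdNew = o.Id
    FROM dbo.Orders o JOIN inserted i ON o.Id = i.Id
    WHERE o.IdNew IS NULL;
GO
-- step 3: backfill in small batches to keep logging and blocking manageable
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.Orders SET IdNew = Id WHERE IdNew IS NULL;
    IF @@ROWCOUNT = 0 BREAK;
END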
Create a parallel table with the longer data type for new rows and UNION the results?
What I had to do was copy the data into a new table with the desired structure (primary/clustered key only; non-clustered indexes and FKs added once complete). If you don't have the room, you could bcp the data out and back in. You may need an application outage to make this happen.
What doesn't work: ALTER TABLE Orderhistory ALTER COLUMN ID bigint, because of the primary key. And don't drop the key and then alter the column: you will just fill your log file, and it will take much longer than the copy/bcp approach.
Never use the SSMS table designer to change a column's type: it copies the table into a temp table and then renames it once done. Look up the ALTER TABLE ... ALTER COLUMN syntax and use it, and consider defragmenting once complete if you widened a column that sits in the middle of the table.
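A rough sketch of the copy/rename approach described above (column names hypothetical; the renames happen during a short outage):
SELECT CAST(Id AS BIGINT) AS Id, OrderDate  -- plus the rest of the columns
INTO dbo.Orderhistory_New
FROM dbo.Orderhistory;
-- create the clustered PK on the new table here, then during the outage:
EXEC sp_rename 'dbo.Orderhistory', 'Orderhistory_Old';
EXEC sp_rename 'dbo.Orderhistory_New', 'Orderhistory';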
I have various reasons for needing to implement, in addition to the identity column PK, a second, concurrency safe, auto-incrementing column in a SQL Server 2005 database. Being able to have more than one identity column would be ideal, but I'm looking at using a trigger to simulate this as close as possible to the metal.
I believe I have to use a serializable isolation level transaction in the trigger. Do I go about this the way I would use such a transaction in a normal SQL query?
It is a non-negotiable requirement that the business meaning of the second incrementing column remain separated from the behind the scenes meaning of the first, PK, incrementing column.
To put things as simply as I can, if I create JobCards '0001', '0002', and '0003', then delete JobCards '0002' and '0003', the next Jobcard I create must have ID '0002', not '0004'.
Just an idea: if you have two "identity" columns, then surely they would be in sync; if not exactly the same value, they would differ by a constant. If so, why not add the "second identity" as a COMPUTED column that offsets the primary identity? Or is my logic flawed here?
Edit: As per Martin's comment, note that your calculation might need to be N * id + C, where N is the increment and C the offset/delta (excuse my rusty maths).
For example:
ALTER TABLE MyTable ADD OtherIdentity AS Id * 2 + 1;
Edit
Note that for SQL Server 2012 and later, you can use an independent SEQUENCE to create two or more independently incrementing columns in the same table.
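A minimal sketch of that on SQL Server 2012+, with hypothetical object names:
CREATE SEQUENCE dbo.SecondSeq AS INT START WITH 1 INCREMENT BY 1;
CREATE TABLE dbo.TwoCounters (
    Id       INT IDENTITY(1,1) PRIMARY KEY,
    SecondId INT NOT NULL DEFAULT (NEXT VALUE FOR dbo.SecondSeq)
);
Note, though, that a sequence has the same property the OP is trying to avoid: like an identity, it does not reclaim values lost to deletes or rollbacks.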
Note: the OP has edited the original requirement to include reclaiming sequence values (noting that identity columns in SQL Server do not reclaim used IDs once deleted).
I would disallow all the deletes from this table altogether. Instead of deleting, I would mark rows as available or inactive. Instead of inserting, I would first search if there are inactive rows, and reuse the one with the smallest ID if they exist. I would insert only if there are no available rows already in the table.
Of course, I would serialize all inserts and deletes with sp_getapplock.
You can use a trigger to disallow all deletes; it is simpler than filling gaps.
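A minimal sketch of the soft-delete variant, assuming a hypothetical dbo.JobCards table with an IsActive flag as described above:
CREATE TRIGGER TR_JobCards_SoftDelete ON dbo.JobCards INSTEAD OF DELETE
AS
BEGIN
    -- turn every DELETE into a soft delete so the IDs remain in the table for reuse
    UPDATE jc SET IsActive = 0
    FROM dbo.JobCards jc
    JOIN deleted d ON jc.Id = d.Id;
END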
A solution to this issue from "Inside Microsoft SQL Server 2008: T-SQL Querying" is to create another table with a single row that holds the current max value.
CREATE TABLE dbo.Sequence(
val int
)
Then to allocate a range of sufficient size for your insert
CREATE PROC dbo.GetSequence
    @val AS int OUTPUT,
    @n AS int = 1
AS
UPDATE dbo.Sequence
    SET @val = val = val + @n;
SET @val = @val - @n + 1;
This will block other concurrent attempts to increment the sequence until the first transaction commits.
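Example usage, assuming the Sequence table was initialized with a single row (e.g. INSERT INTO dbo.Sequence VALUES (0)):
DECLARE @first int;
EXEC dbo.GetSequence @val = @first OUTPUT, @n = 5;  -- reserve a block of 5 values
SELECT @first AS FirstReservedValue;                -- @first through @first + 4 are yours to insert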
For a non-blocking solution that doesn't handle multi-row inserts, see my answer here.
This is probably a terrible idea, but it works in at least a limited-use scenario.
Just use a regular identity and reseed on deletes.
create table reseedtest (
a int identity(1,1) not null,
name varchar(100)
)
insert reseedtest values('erik'),('john'),('selina')
select * from reseedtest
go
CREATE TRIGGER TR_reseedtest_D ON reseedtest FOR DELETE
AS
BEGIN TRAN
DECLARE @a int
SET @a = (SELECT TOP 1 a FROM reseedtest WITH (TABLOCKX, HOLDLOCK))
--anyone know another way to lock a table besides doing something to it?
DBCC CHECKIDENT(reseedtest, reseed, 0)
DBCC CHECKIDENT(reseedtest, reseed)
COMMIT TRAN
GO
delete reseedtest where a >= 2
insert reseedtest values('katarina'),('david')
select * from reseedtest
drop table reseedtest
This won't work if you are deleting from the "middle of the stack" as it were, but it works fine for deletes from the incrementing end.
Reseeding once to 0 and then again without a value is just a trick to avoid calculating the correct reseed value: DBCC CHECKIDENT with RESEED and no new value resets the current identity to the maximum value present in the column.
If you never delete from the table, you could create a view with an additional column generated by ROW_NUMBER().
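A sketch of such a view, with hypothetical names:
CREATE VIEW dbo.JobCardsNumbered AS
SELECT Id, Name,
       ROW_NUMBER() OVER (ORDER BY Id) AS SeqNo  -- gap-free only while rows are never deleted
FROM dbo.JobCards;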
Also, a SQL Server identity can get out of sync with a user-generated one, depending on the use of rollbacks (identity values are consumed even when an insert rolls back).
Can an auto-incrementing primary key be constrained to an artificial range? For example, if I only want integer primary keys from a specific range, say between 100 and 999 inclusive, and auto-incrementing, is that possible? And if so, on which database server software? I'm mainly interested in MS SQL Server 2000 or greater, but others would be interesting to know about.
Yes you can do it with an identity column and a check constraint:
CREATE TABLE test(rowid int identity(100,1) primary key)
GO
ALTER TABLE test ADD CONSTRAINT CK_test_Range
CHECK (rowid >= 100 AND rowid < 1000)
GO
INSERT INTO test default values;
GO 900
SELECT * FROM test
GO
DROP TABLE test
If you don't want any gaps between the rowids, it gets a bit more complex.
OK, you can do this as shown above, but be aware that the smaller the range, the more likely you are to reach the point where no data can be inserted because the range has been exhausted. And remember that every rolled-back transaction or deleted record uses up part of the range. I'd think very seriously before taking such a step, or at least give it a range far greater than the number of records the table could ever hold.
Try putting a check constraint on the table to ensure the primary key is valid:
ALTER TABLE MyTable
ADD CONSTRAINT MyPrimaryKeyConstraint
CHECK (PrimaryKey >= 100 AND PrimaryKey <= 999)
Change "MyTable" to your table name, "MyPrimaryKeyConstraint" to whatever descriptive name you would like, and "PrimaryKey" to the column name of the primary key.
You can change the starting value using DBCC CHECKIDENT(<tablename>, RESEED, <newstart>); you can restrict the upper range with an ordinary CHECK constraint.
Keep in mind that an IDENTITY column can have gaps in the sequence of numbers. Unused numbers are not automatically reused. So a range of 100 to 999 does not mean your table will permit exactly 900 rows - it could be something less than that.