I've deleted some records from a table in a SQL Server database.
The IDs in the table look like this:
99
100
101
1200
1201...
I want to delete the later records (IDs > 1200) and then reset the auto increment so the next autogenerated ID will be 102, keeping my records sequential. Is there a way to do this in SQL Server?
Issue the following command to reseed mytable to start at 1:
DBCC CHECKIDENT (mytable, RESEED, 0)
Read about it in Books Online (BOL, the SQL Server help). Also be careful that you don't have records with IDs higher than the seed you are setting.
DBCC CHECKIDENT('databasename.dbo.tablename', RESEED, number)
If number = 0, then on the next insert the auto increment field will contain the value 1.
If number = 101, then on the next insert the auto increment field will contain the value 102.
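For example, a minimal sketch of the question's scenario (the table name mytable and identity column id are assumptions):
DELETE FROM mytable WHERE id > 1200;       -- remove the later records
DBCC CHECKIDENT ('mytable', RESEED, 101);  -- current identity value becomes 101
-- The next row inserted will get the identity value 102.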
Some additional info... May be useful to you
Before supplying the auto increment number in the query above, you have to make sure the existing values in your table's auto increment column are all less than that number.
To get the maximum value of a column (column_name) from a table (table1), you can use the following query:
SELECT MAX(column_name) FROM table1
semi idiot-proof:
declare @max int;
select @max = max([key]) from [table];
dbcc checkident('table', reseed, @max);
http://sqlserverplanet.com/tsql/using-dbcc-checkident-to-reseed-a-table-after-delete
If you're using MySQL, try this:
ALTER TABLE tablename AUTO_INCREMENT = 1
I figured it out. It's:
DBCC CHECKIDENT ('tablename', RESEED, newseed)
Delete and Reseed all the tables in a database.
USE [DatabaseName]
EXEC sp_MSforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all" -- Disable all the constraints
EXEC sp_MSforeachtable "DELETE FROM ?" -- Delete all the table data
EXEC sp_MSforeachtable 'DBCC CHECKIDENT(''?'', RESEED, 0)' -- Reseed all the tables to 0
EXEC sp_MSforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all" -- Re-enable all the constraints
-- You may ignore the errors reported for tables without an auto increment field.
Based on the accepted answer, for those who ran into a similar issue with full schema qualification:
an unquoted [MyDataBase].[MySchemaName].[MyTable] results in an error; you need to be in the context of that DB, or quote the name as shown below.
That is, the following will throw an error:
DBCC CHECKIDENT ([MyDataBase].[MySchemaName].[MyTable], RESEED, 0)
Enclose the fully-qualified table name with single quotes instead:
DBCC CHECKIDENT ('[MyDataBase].[MySchemaName].[MyTable]', RESEED, 0)
Several answers recommend using a statement something like this:
DBCC CHECKIDENT (mytable, RESEED, 0)
But the OP said "deleted some records", which may not be all of them, so a value of 0 is not always the right one. Another answer suggested automatically finding the maximum current value and reseeding to that one, but that runs into trouble if there are no records in the table, and thus max() will return NULL. A comment suggested using simply
DBCC CHECKIDENT (mytable)
to reset the value, but another comment correctly stated that this only increases the value to the maximum already in the table; this will not reduce the value if it is already higher than the maximum in the table, which is what the OP wanted to do.
A better solution combines these ideas. The first CHECKIDENT resets the value to 0, and the second resets it to the highest value currently in the table, in case there are records in the table:
DBCC CHECKIDENT (mytable, RESEED, 0)
DBCC CHECKIDENT (mytable)
As multiple comments have indicated, make sure there are no foreign keys in other tables pointing to the deleted records. Otherwise those foreign keys will point at records you create after reseeding the table, which is almost certainly not what you had in mind.
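Before reseeding, a quick sketch to list any foreign keys in other tables that reference yours (the name dbo.mytable is an assumption):
SELECT fk.name AS foreign_key,
       OBJECT_NAME(fk.parent_object_id) AS referencing_table
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.mytable');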
I want to add this answer because the DBCC CHECKIDENT approach will produce problems when you use schemas for tables. Use this to be sure:
DECLARE @Table AS NVARCHAR(500) = 'myschema.mytable';
DBCC CHECKIDENT (@Table, RESEED, 0);
If you want to check the success of the operation, use
SELECT IDENT_CURRENT(@Table);
which should output 0 in the example above.
You do not want to do this in general. Reseeding can create data integrity problems. It is really only for use on development systems where you are wiping out all test data and starting over. It should not be used on a production system in case not all related records have been deleted (not every table that should be in a foreign key relationship is!). You can create a mess doing this, especially if you mean to do it on a regular basis after every delete. It is a bad idea to worry about gaps in your identity field values.
What about this?
ALTER TABLE `table_name`
MODIFY `id` int(12) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=0;
This is a quick and simple way to change the auto increment to 0 or whatever number you want. I figured this out by exporting a database and reading the code myself.
You can also write it like this to make it a single-line solution:
ALTER TABLE `table_name` MODIFY `id` int(12) NOT NULL AUTO_INCREMENT, AUTO_INCREMENT=0;
To reset every key in the database to autoincrement from the max of the last highest key:
Exec sp_MSforeachtable 'DBCC CHECKIDENT(''?'', RESEED, 0)'
Exec sp_MSforeachtable 'DBCC CHECKIDENT(''?'', RESEED)'
If you just want to reset the primary key/sequence so it continues from a value you want in SQL Server, here's the solution:
IDs:
99 100 101 1200 1201...
Delete the rows with IDs >= 1200. (Be careful if you have foreign key constraints tied to these rows; they have to be dealt with, or the referencing rows deleted too, before you can delete these.)
Now you want to make sure you know the MAX ID to be sure:
DECLARE @max_id AS INT = (SELECT MAX(your_column_name) FROM your_table) + 1;
(Note: +1 to start the ID sequence after max value)
Restart your sequence:
EXEC('ALTER SEQUENCE your_sequence_name RESTART WITH ' + CAST(@max_id AS VARCHAR(20)));
(Note: the space after WITH is necessary, and the integer variable must be cast to a string before concatenation.)
Now new records will start with 102 as the ID.
I know this is an old question. However, I was looking for a similar solution for MySQL and this question showed up.
For those who are looking for a MySQL solution, you need to run this query:
Important: you cannot reset the counter to a value less than or equal to the value that is currently in use. For both InnoDB and MyISAM, if the value is less than or equal to the maximum value currently in the AUTO_INCREMENT column, the value is reset to the current maximum AUTO_INCREMENT column value plus one.
ALTER TABLE <your-table-name> AUTO_INCREMENT = 100
documentation
Related
In SQL Server 2012, the following query seeds the identity column myTable_id from 2 instead of 1. Why? myTable_id is also the PK.
DELETE FROM myTable;
GO
SELECT * FROM myTable --0 rows are returned as expected
GO
DBCC CHECKIDENT(myTable, RESEED,1)
GO
INSERT INTO myTable (col1, col2, col3) SELECT col1, col2, col3 FROM AnotherTable
GO
SELECT * FROM myTable --1005 rows are returned as expected, but identity value starts from 2
GO
Remark:
The inserted data is right; the only issue is that the identity values of the newly inserted rows start from 2 instead of 1.
In the SQL above, if I use DBCC CHECKIDENT(myTable, RESEED, 0) instead, the identity column correctly starts from 1.
From the docs:
The seed value is the value inserted into an identity column for the very first row loaded into the table. All subsequent rows contain the current identity value plus the increment value where current identity value is the last identity value generated for the table or view.
So if you seed from 10, the next value to be inserted will be 11.
There is nothing wrong with the answer here, but the confusion comes from Microsoft's approach itself.
I think that:
DBCC CHECKIDENT(myTable, RESEED, 0)
should have the same behavior everywhere:
on a newly created table,
after deleting table records,
after truncating the table.
Otherwise we need to check the table's state before running it.
It works as expected; see also
https://learn.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkident-transact-sql#examples
The value 1 means that the current identity value will be 1 and the next identity will start at 2.
To get it starting at 1 you should do
DBCC CHECKIDENT(myTable, RESEED, 0)
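Here is a throwaway sketch of the difference (the temp table and its columns are made up):
CREATE TABLE #t (id INT IDENTITY(1,1), val VARCHAR(10));
INSERT INTO #t (val) VALUES ('x'), ('y');   -- ids 1 and 2
DELETE FROM #t;                             -- table is empty, but rows have been inserted before

DBCC CHECKIDENT ('#t', RESEED, 1);
INSERT INTO #t (val) VALUES ('a');          -- gets id = 2 (newReseedValue + 1)

DELETE FROM #t;
DBCC CHECKIDENT ('#t', RESEED, 0);
INSERT INTO #t (val) VALUES ('b');          -- gets id = 1

DROP TABLE #t;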
This does the trick for me:
DBCC CHECKIDENT ([Table], RESEED, 0)
DBCC CHECKIDENT ([Table], RESEED)
Just a general question.
I have a table with an IDENTITY PK that is not referenced by any other table.
There is one other key column in the table, and it is the table's only FK.
I run a DELETE command on that table with some condition.
I can then INSERT new records into the table and the next PK IDs are generated automatically.
BUT the deleted PK ID numbers are not reused.
If I run something like
DECLARE @max_PKid BIGINT;
SET @max_PKid = (SELECT ISNULL(MAX(PKid), 0) FROM [Table] WHERE FKid = @somevalue);
DBCC CHECKIDENT ('Table', RESEED, @max_PKid)
right after the DELETE, there will be violation problems on the next INSERT.
Question 1: Is it acceptable practice in general to have gaps in the (unreseeded) PK IDs in the table after doing DELETE/INSERT without using DBCC CHECKIDENT? Should I care about them?
Question 2: If not, what can I do about it?
No, you should not worry. There are also other circumstances in which you can get a 'hole' in an IDENTITY range. For example, if you start a transaction, insert 100,000 rows into a table, then roll back that transaction, those IDENTITY values are gone. This is not something you should be concerned about.
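A small sketch demonstrating that kind of gap (the temp table is hypothetical):
CREATE TABLE #demo (id INT IDENTITY(1,1), val VARCHAR(10));

INSERT INTO #demo (val) VALUES ('a');   -- id = 1

BEGIN TRAN;
INSERT INTO #demo (val) VALUES ('b');   -- id = 2 is allocated
ROLLBACK;                               -- the row is gone, but the value 2 is not reused

INSERT INTO #demo (val) VALUES ('c');   -- id = 3, leaving a gap at 2

SELECT id, val FROM #demo;              -- returns ids 1 and 3
DROP TABLE #demo;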
I am using SQL Server 2005 and I have a table with an auto incrementing column, but for some reason the auto increment field does not start at 1 but at some seemingly random number like 21 or 91. Why does that happen?
You either need to set the seed for the column, or, if you had entered rows previously, you need to execute a TRUNCATE TABLE command on the table:
TRUNCATE TABLE XYZ
SQL Server does not use max(id) + 1 as the next identity the way some other databases do. It stores the last used id and increments it.
You can reseed the identity:
DBCC CHECKIDENT ('tablex', RESEED, 1)
or truncate the table, which also deletes all the data:
TRUNCATE TABLE tablex
You can of course combine the identity reseed with the last value, but DBCC CHECKIDENT does not accept a subquery, so use a variable:
DECLARE @newseed INT = (SELECT MAX(id) FROM tablex);
DBCC CHECKIDENT ('tablex', RESEED, @newseed)
But be aware of producing errors on reseeding the id due to conflicts; the auto increment id must stay unique!
We have already created the database framework, with all the relations and dependencies. But the tables contain only dummy data, and we need to get rid of this dummy data and start adding the correct data. How can we clear everything and reset the primary keys (IsIdentity: yes) back to zero, without affecting the foreign-key relational structure?
Thanks a lot!
You can take the following steps:
-- disable all foreign key constraints
EXEC sp_msforeachtable "ALTER TABLE ? NOCHECK CONSTRAINT all"
-- delete data in all tables
EXEC sp_MSForEachTable "DELETE FROM ?"
-- enable all constraints
exec sp_msforeachtable "ALTER TABLE ? WITH CHECK CHECK CONSTRAINT all"
More on disabling constraints and triggers here
If some of the tables have identity columns, we may want to reseed them:
EXEC sp_MSforeachtable "DBCC CHECKIDENT ( '?', RESEED, 0)"
Note that the behaviour of RESEED differs between a brand new table and one that has had some data inserted previously. From BOL:
DBCC CHECKIDENT ('table_name', RESEED, newReseedValue)
The current identity value is set to the newReseedValue. If no rows have been inserted to the table since it was created, the first row inserted after executing DBCC CHECKIDENT will use newReseedValue as the identity. Otherwise, the next row inserted will use newReseedValue + 1. If the value of newReseedValue is less than the maximum value in the identity column, error message 2627 will be generated on subsequent references to the table.
I'd use TRUNCATE:
-- Truncate table (very fast, and it resets Identity as well)
EXEC sp_MSForEachTable "TRUNCATE TABLE ?"
Of course, disable and re-enable check constraints too, as suggested by kristof.
Reseeding:
DBCC CHECKIDENT (yourtable, RESEED, 1)
resets the identity seed to 1.
A plain DELETE FROM the table deletes the data but does not affect anything else (in particular, it does not reset the identity).
Generate a schema-only SQL script for the database using the Database Publishing Wizard, and enable the option to drop the tables if they exist. Run this script on your database; this will flush everything and give you the fresh schema you need.
To add a NOT NULL Column to a table with many records, a DEFAULT constraint needs to be applied. This constraint causes the entire ALTER TABLE command to take a long time to run if the table is very large. This is because:
Assumptions:
The DEFAULT constraint modifies existing records. This means that the db needs to increase the size of each record, which causes it to shift records on full data-pages to other data-pages and that takes time.
The DEFAULT update executes as an atomic transaction. This means that the transaction log will need to be grown so that a roll-back can be executed if necessary.
The transaction log keeps track of the entire record. Therefore, even though only a single field is modified, the space needed by the log will be based on the size of the entire record multiplied by the # of existing records. This means that adding a column to a table with small records will be faster than adding a column to a table with large records even if the total # of records are the same for both tables.
Possible solutions:
Suck it up and wait for the process to complete. Just make sure to set the timeout period to be very long. The problem with this is that it may take hours or days to do depending on the # of records.
Add the column but allow NULL. Afterward, run an UPDATE query to set the DEFAULT value for existing rows. Do not update everything in one statement; update batches of records at a time or you'll end up with the same problem as solution #1. The problem with this approach is that you end up with a column that allows NULL when you know that this is an unnecessary option. I believe there are best practice documents out there that say you should not have columns that allow NULL unless it's necessary.
Create a new table with the same schema. Add the column to that schema. Transfer the data over from the original table. Drop the original table and rename the new table. I'm not certain how this is any better than #1.
Questions:
Are my assumptions correct?
Are these my only solutions? If so, which one is the best? If not, what else could I do?
I ran into this problem at work also, and my solution was along the lines of #2.
Here are my steps (I am using SQL Server 2005):
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn varchar(40) DEFAULT('')
2) Add a NOT NULL constraint with the NOCHECK option. NOCHECK means the constraint is not enforced against existing values:
ALTER TABLE MyTable WITH NOCHECK
ADD CONSTRAINT MyColumn_NOTNULL CHECK (MyColumn IS NOT NULL)
3) Update the values incrementally in table:
GO
UPDATE TOP(3000) MyTable SET MyColumn = '' WHERE MyColumn IS NULL
GO 1000
The UPDATE statement will only update a maximum of 3000 records each time, which lets a chunk of data be saved at a time. I have to use "MyColumn IS NULL" because my table does not have a sequential primary key.
GO 1000 will execute the previous batch 1000 times, which covers 3 million records; if you need more, just increase this number. Once all rows have a value, the remaining executions simply update 0 records.
Here's what I would try:
Do a full backup of the database.
Add the new column, allowing nulls - don't set a default.
Set SIMPLE recovery, which truncates the tran log as soon as each batch is committed.
The SQL is: ALTER DATABASE XXX SET RECOVERY SIMPLE
Run the update in batches as you discussed above, committing after each one.
Reset the new column to no longer allow nulls.
Go back to the normal FULL recovery.
The SQL is: ALTER DATABASE XXX SET RECOVERY FULL
Backup the database again.
The use of the SIMPLE recovery model doesn't stop logging, but it significantly reduces its impact. This is because the server discards the recovery information after every commit.
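Put together, a rough sketch of that sequence (the database name XXX and the table/column names are placeholders, and the batch size is arbitrary):
ALTER DATABASE XXX SET RECOVERY SIMPLE;

ALTER TABLE dbo.MyTable ADD MyColumn INT NULL;   -- add as nullable, no default yet

DECLARE @rows INT = 1;
WHILE @rows > 0
BEGIN
    -- each batch commits on its own, so the log can be reused between batches
    UPDATE TOP (10000) dbo.MyTable
    SET MyColumn = 0
    WHERE MyColumn IS NULL;
    SET @rows = @@ROWCOUNT;
END;

ALTER TABLE dbo.MyTable ALTER COLUMN MyColumn INT NOT NULL;   -- now that every row has a value

ALTER DATABASE XXX SET RECOVERY FULL;
-- Take a full backup afterwards to restart the log backup chain.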
You could:
Start a transaction.
Grab a write lock on your original table so no one writes to it.
Create a shadow table with the new schema.
Transfer all the data from the original table.
Execute sp_rename to rename the old table out.
Execute sp_rename to rename the new table in.
Finally, you commit the transaction.
The advantage of this approach is that your readers will be able to access the table during the long process and that you can perform any kind of schema change in the background.
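A rough sketch of that approach (object and column names are assumptions, and error handling is omitted):
BEGIN TRAN;

-- Exclusive table lock, held to the end of the transaction, so no one writes to the original
SELECT TOP (0) * FROM dbo.MyTable WITH (TABLOCKX, HOLDLOCK);

-- Shadow table with the new NOT NULL column and its default
CREATE TABLE dbo.MyTable_New
(
    Id       INT          NOT NULL PRIMARY KEY,
    Payload  VARCHAR(100) NULL,
    MyColumn INT          NOT NULL CONSTRAINT DF_MyTable_New_MyColumn DEFAULT (0)
);

INSERT INTO dbo.MyTable_New (Id, Payload, MyColumn)
SELECT Id, Payload, 0
FROM dbo.MyTable;

-- Swap the tables by renaming
EXEC sp_rename 'dbo.MyTable', 'MyTable_Old';
EXEC sp_rename 'dbo.MyTable_New', 'MyTable';

COMMIT;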
Just to update this with the latest information.
In SQL Server 2012 this can now be carried out as an online operation in the following circumstances
Enterprise Edition only
The default must be a runtime constant
For the second requirement, examples would be a literal constant or a function such as GETDATE() that evaluates to the same value for all rows. A default of NEWID() would not qualify and would still end up updating all rows there and then.
For defaults that qualify, SQL Server evaluates them and stores the result as the default value in the column metadata, so this is independent of the default constraint that is created (which can even be dropped if no longer required). This is viewable in sys.system_internals_partition_columns. The value doesn't get written out to the rows until the next time they happen to get updated.
More details about this here: online non-null with values column add in sql server 2012
Admittedly this is an old question. My colleague recently told me that he was able to do it in one single ALTER TABLE statement on a table with 13.6M rows. It finished within a second in SQL Server 2012. I was able to confirm the same on a table with 8M rows. Did something change in a later version of SQL Server?
Alter table mytable add mycolumn char(1) not null default('N');
I think this depends on the SQL flavor you are using, but what if you took option 2, and at the very end altered the column to NOT NULL with the default value?
Would it be fast, since it sees that none of the values are null?
If you want the column in the same table, you'll just have to do it. Now, option 3 is potentially the best for this because you can still have the database "live" while this operation is going on. If you use option 1, the table is locked while the operation happens and then you're really stuck.
If you don't really care if the column is in the table, then I suppose a segmented approach is the next best. Though, I really try to avoid that (to the point that I don't do it) because then like Charles Bretana says, you'll have to make sure and find all the places that update/insert that table and modify those. Ugh!
I had a similar problem, and went for your option #2.
It takes 20 minutes this way, as opposed to 32 hours the other way!!! Huge difference, thanks for the tip.
I wrote a full blog entry about it, but here's the important sql:
Alter table MyTable
Add MyNewColumn char(10) null default '?';
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 0 and 1000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 1000000 and 2000000
go
update MyTable set MyNewColumn='?' where MyPrimaryKey between 2000000 and 3000000
go
..etc..
Alter table MyTable
Alter column MyNewColumn char(10) not null;
And the blog entry if you're interested:
http://splinter.com.au/adding-a-column-to-a-massive-sql-server-table
I had a similar problem and went with a modified #3 approach. In my case the database was in SIMPLE recovery mode and the table the column was to be added to was not referenced by any FK constraints.
Instead of creating a new table with the same schema and copying the contents of the original table, I used the SELECT...INTO syntax.
According to Microsoft (http://technet.microsoft.com/en-us/library/ms188029(v=sql.105).aspx)
The amount of logging for SELECT...INTO depends on the recovery model in effect for the database. Under the simple recovery model or bulk-logged recovery model, bulk operations are minimally logged. With minimal logging, using the SELECT...INTO statement can be more efficient than creating a table and then populating the table with an INSERT statement. For more information, see Operations That Can Be Minimally Logged.
The sequence of steps:
1. Move data from the old table to the new one while adding the new column with its default:
SELECT [table].*, CAST('default' AS nvarchar(256)) AS new_column
INTO table_copy
FROM [table]
2. Drop the old table:
DROP TABLE [table]
3. Rename the newly created table:
EXEC sp_rename 'table_copy', 'table'
4. Create the necessary constraints and indexes on the new table (a sketch follows below).
In my case the table had more than 100 million rows and this approach completed faster than approach #2 and log space growth was minimal.
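For step 4, a minimal sketch (the key column id and the constraint/index names are assumptions; note that a column produced by CAST in SELECT...INTO comes out nullable, so tighten it here as well):
ALTER TABLE [table] ALTER COLUMN new_column nvarchar(256) NOT NULL;
ALTER TABLE [table] ADD CONSTRAINT PK_table PRIMARY KEY CLUSTERED (id);
CREATE NONCLUSTERED INDEX IX_table_new_column ON [table] (new_column);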
1) Add the column to the table with a default value:
ALTER TABLE MyTable ADD MyColumn int default 0
2) Update the values incrementally in the table (same effect as accepted answer). Adjust the number of records being updated to your environment, to avoid blocking other users/processes.
DECLARE @rowcount INT = 1
WHILE (@rowcount > 0)
BEGIN
    UPDATE TOP(10000) MyTable SET MyColumn = 0 WHERE MyColumn IS NULL
    SET @rowcount = @@ROWCOUNT
END
3) Alter the column definition to require not null. Run the following at a moment when the table is not in use (or schedule a few minutes of downtime). I have successfully used this for tables with millions of records.
ALTER TABLE MyTable ALTER COLUMN MyColumn int NOT NULL
I would use a CURSOR instead of a single UPDATE. The cursor will update all matching records one by one; it takes time but does not lock the whole table.
If you want to avoid lock pressure, add a wait (WAITFOR DELAY) between updates.
Also, I am not sure that a DEFAULT constraint changes existing rows.
Probably it is the NOT NULL constraint used together with DEFAULT that causes the case described by the author.
If it does change them, add it at the end.
So the pseudocode will look like this:
-- without the NOT NULL constraint -- we will add it at the end
ALTER TABLE [table] ADD new_column INT DEFAULT 0

DECLARE @key INT

DECLARE fillNullColumn CURSOR LOCAL FAST_FORWARD FOR
    SELECT
        [key]
    FROM
        [table] WITH (NOLOCK)
    WHERE
        new_column IS NULL

OPEN fillNullColumn

FETCH NEXT FROM fillNullColumn INTO @key
WHILE @@FETCH_STATUS = 0 BEGIN
    UPDATE
        [table] WITH (ROWLOCK)
    SET
        new_column = 0 -- default value
    WHERE
        [key] = @key

    WAITFOR DELAY '00:00:05' -- wait 5 seconds; keep in mind this updates only 12 rows per minute

    FETCH NEXT FROM fillNullColumn INTO @key
END

CLOSE fillNullColumn
DEALLOCATE fillNullColumn

ALTER TABLE [table] ALTER COLUMN new_column INT NOT NULL
There may still be some syntax issues, but I hope this helps to solve your problem.
Good luck!
Vertically segment the table. This means you will have two tables, with the same primary key, and exactly the same number of records... One will be the one you already have, the other will have just the key, and the new Non-Null column (with default value) .
Modify all Insert, Update, and Delete code so they keep the two tables in sync. If you want, you can create a view that "joins" the two tables together into a single logical combination that appears like one table for client Select statements (a sketch follows below).
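A hedged sketch of the companion table and the unifying view (all names, including the Payload column, are made up):
-- Companion table: same primary key, plus the new NOT NULL column with its default
CREATE TABLE dbo.MyTable_Extra
(
    Id       INT NOT NULL
        CONSTRAINT PK_MyTable_Extra PRIMARY KEY
        CONSTRAINT FK_MyTable_Extra_MyTable REFERENCES dbo.MyTable (Id),
    MyColumn INT NOT NULL
        CONSTRAINT DF_MyTable_Extra_MyColumn DEFAULT (0)
);
GO

-- View that presents the two tables as one logical table for SELECTs
CREATE VIEW dbo.MyTable_Combined
AS
SELECT t.Id, t.Payload, ISNULL(x.MyColumn, 0) AS MyColumn
FROM dbo.MyTable AS t
LEFT JOIN dbo.MyTable_Extra AS x ON x.Id = t.Id;
GO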