Resetting the primary key to 1 - sql-server

I have a Microsoft SQL Server database with hundreds of tables, and the tables contain data as well. It is the database of a web application. What I want to do is delete the existing records and reset the primary keys to 1 (or 0).
I have tried
`DBCC CHECKIDENT ('dbo.tbl',RESEED,0); `
but it does not work for me, as in most of the tables the primary key is not an identity column.
I cannot truncate the tables because their primary keys are referenced as foreign keys in many other tables.
I have also tried adding the identity specification to the primary key column, running the CHECKIDENT query, and then changing it back to a non-identity column, but after inserting a record again the numbering continues from where it left off.
Making changes in the code is not an option for me.
Please help.

Based on your question I am not sure about the main objective. If you need to truncate a lot of tables and change their structure to add an IDENTITY property, why can't you disable the foreign keys? In the past I have used a standard process to rebuild a table and migrate all the information; it involves a group of steps. I will try to help you, but you should follow the steps below.
Steps:
1) Disable the foreign keys so you can alter the structure of your tables. You can find a solution for this task at the following link:
Temporarily disable all foreign key constraints
2) Alter the table to add the new IDENTITY property; this is a classic ALTER TABLE operation.
3) Execute the statement posted previously:
DBCC CHECKIDENT ('dbo.tbl',RESEED,0);
Try to follow this path, and if you have any problem just ask us.
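For illustration only, here is a rough sketch of those steps on a hypothetical table dbo.Tbl with a hypothetical child table dbo.Child; the table, column, and constraint names are assumptions, and note that SQL Server cannot turn an existing column into IDENTITY in place, so in practice step 2 means adding a new identity column or rebuilding the table with one:

-- Sketch only: dbo.Child, FK_Child_Tbl, and dbo.Tbl are hypothetical names
-- 1) Disable the foreign keys that reference dbo.Tbl
ALTER TABLE dbo.Child NOCHECK CONSTRAINT FK_Child_Tbl;

-- 2) Add (or rebuild with) an identity column
ALTER TABLE dbo.Tbl ADD NewId INT IDENTITY(1,1);

-- 3) Reseed so the next inserted value is 1
DBCC CHECKIDENT ('dbo.Tbl', RESEED, 0);

-- Re-enable and re-validate the foreign keys afterwards
ALTER TABLE dbo.Child WITH CHECK CHECK CONSTRAINT FK_Child_Tbl;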

You cannot truncate a table that is referenced by a relationship. You should remove the relationship first.

My understanding of this question:
You have a database with tables that you want to empty and next have them use primary key values starting at 0 or 1.
Some of these tables use an identity value and you already have a solution for those (did you know you can find out which columns have an identity by querying the sys.columns view? Look at the is_identity column).
Some tables do not use an identity but get their pk values from an unknown source, which we can't modify.
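As a side note, a minimal query along those lines (a sketch; adjust to your schema as needed):

-- Sketch: list identity columns across all user tables
SELECT OBJECT_NAME(c.object_id) AS table_name, c.name AS column_name
FROM sys.columns AS c
WHERE c.is_identity = 1
ORDER BY table_name, column_name;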
The only solution I see, is creating an after insert trigger (or modifying) on those tables that subtracts from the new pk value.
E.g.: your "hidden generator" will generate a next value 5254, but you want the next pk value to become one:
CREATE TRIGGER trg_sometable_ai
ON sometable
AFTER INSERT
AS
BEGIN
    -- shift the generated key back so the stored values start at 1
    UPDATE st
    SET st.pk_col = st.pk_col - 5253
    FROM sometable AS st
    INNER JOIN INSERTED AS i
        ON i.pk_col = st.pk_col;
END
You'll have to determine the next value and thus the "subtract value" for each table.
If the code also inserts child records into tables with a foreign key to this table, using the previously generated value, you will have to modify the triggers on those tables as well...
This is a "last resort" solution and something I would recommend against in any scenario that has other options. Manipulating primary key values is generally not a good idea.

Related

Change dependent records on delete in SQL

I'm adding a new job category to a database. There are something like 20 tables that use jobCategoryID as a foreign key. Is there a way to create a function that would go through those tables and set the jobCategoryID to NULL if the category is ever deleted in the parent table? Inserting the line isn't the issue; it's just for a backout script in case the product owners decide at a later date that they don't want to keep the new job category.
You need to take a couple of actions. First, update the dirty records (rows whose jobCategoryID no longer exists in the parent table) to NULL. For each referencing table use:
UPDATE referencing_table
SET jobCategoryID = NULL
WHERE jobCategoryID NOT IN (SELECT jobCategoryID FROM parent_table)
Then set the delete rule of the foreign keys to SET NULL.
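A minimal sketch of what that looks like, assuming a hypothetical referencing table dbo.Jobs and parent table dbo.JobCategory (the table and constraint names are made up):

-- Recreate the FK with a SET NULL delete rule (names are hypothetical)
ALTER TABLE dbo.Jobs DROP CONSTRAINT FK_Jobs_JobCategory;

ALTER TABLE dbo.Jobs
    ADD CONSTRAINT FK_Jobs_JobCategory
    FOREIGN KEY (jobCategoryID) REFERENCES dbo.JobCategory (jobCategoryID)
    ON DELETE SET NULL;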
If you care about performance, follow the instructions below too.
When you have a foreign key but dirty records, the constraint is not trusted, which means the SQL Server optimizer cannot use it to build the best plans. Run this code to see which constraints are untrusted:
Select * from sys.foreign_keys Where is_not_trusted = 1
For each constraint returned by the query above, adapt the code below to fix the issue:
ALTER TABLE Table_Name WITH CHECK CHECK CONSTRAINT FK_Name

Is there a way to update primary key Identity specification Increment 1 without dropping Foreign Keys?

I am trying to change a primary key Id column to an identity that increments by 1 on each entry. But the column is already referenced by other tables. Is there any way to set the primary key to auto-increment without dropping the foreign keys from the other tables?
If the table isn't that large, generate a script to create an identical table, but change the schema it creates to:
CREATE TABLE MYTABLE_NEW (
PK INT PRIMARY KEY IDENTITY(1,1),
COL1 TYPEx,
COL2 TYPEx,
...
COLn TYPEx)
1) Set your database to single-user mode, make sure no one is in the database or the tables you're changing, or set the table you need to change to READ ONLY.
2) Import your data into MYTABLE_NEW from MYTABLE using SET IDENTITY_INSERT ON.
3) Script your foreign key constraints and save them, in case you need to back out of your change later and/or re-implement them.
4) Drop all the constraints from MYTABLE.
5) Rename MYTABLE to MYTABLE_SAV.
6) Rename MYTABLE_NEW to MYTABLE.
7) Run the constraint scripts to re-implement the constraints on MYTABLE.
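A rough sketch of steps 2, 5 and 6, with hypothetical column names (adapt to your schema):

-- Step 2: copy existing rows, preserving the old key values (sketch)
SET IDENTITY_INSERT MYTABLE_NEW ON;
INSERT INTO MYTABLE_NEW (PK, COL1, COL2)
SELECT PK, COL1, COL2 FROM MYTABLE;
SET IDENTITY_INSERT MYTABLE_NEW OFF;

-- Steps 5 and 6: swap the tables
EXEC sp_rename 'MYTABLE', 'MYTABLE_SAV';
EXEC sp_rename 'MYTABLE_NEW', 'MYTABLE';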
p.s.
You did ask if there was a way to avoid dropping the foreign key constraints. Here's something to try on your test system: at Step 4 run
ALTER TABLE MYTABLE NOCHECK CONSTRAINT ALL
and at Step 7 run ALTER TABLE MYTABLE CHECK CONSTRAINT ALL. I've not tried this myself; it would be interesting to see whether this actually works on renamed tables.
You can script all this ahead of time on a test SQL Server or even a copy of the database staged on a production server--to make implementation day a no-brainer and gauge your SLAs for any change control procedures for your company.
You can follow a similar methodology by deleting the primary key and re-adding it, but you'll need to have the same data inserted in the new column before you delete the old one. So you'll be deleting and inserting schema and inserting primary key data with this approach. I like to avoid touching a production table if at all possible, and having MYTABLE_SAV around in case "anything" unexpected occurs is a comfort to me personally, as I can tell management "the production data was not touched". But some tables are simply too large for this approach to be worthwhile, and tastes and methodologies also differ greatly from DBA to DBA.

Converting int primary key to bigint in Sql Server

We have a production table with 770 million rows and change. We want (need?) to change the primary ID column from int to bigint to allow for future growth (and to avoid the sudden stop when the 32-bit integer space is exhausted).
Experiments in DEV have shown that this is not as simple as altering the column, as we would need to drop and re-create the index. So far in DEV (which is a bit humbler than PROD) dropping the index has not finished after an hour and a half. This table is hit 24/7, and having it offline for that long is not an option.
Has anyone else had to deal with a similar situation? How did you get it done?
Are there alternatives?
Edit: Additional Info:
The Primary key is clustered.
You could attempt a staged approach.
Create a new bigint column
Create an insert trigger to keep the two columns in sync for new entries
Execute an update to populate all the empty values in the bigint column with the converted value
Change the primary index on the table from your old id column to the new one
Point any FK's and queries to use the new column
Change the new column to become your identity column and remove the insert trigger from #2
Delete the old ID column
You should end up spreading the pain out over these 7 steps instead of hitting it all at once.
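A very rough sketch of the first three steps, assuming a hypothetical table dbo.OrderHistory with an int column ID (the names and batch size are made up):

-- 1) Add the new bigint column
ALTER TABLE dbo.OrderHistory ADD ID_big BIGINT NULL;
GO

-- 2) Keep new rows in sync via a trigger
CREATE TRIGGER trg_OrderHistory_SyncId ON dbo.OrderHistory
AFTER INSERT
AS
BEGIN
    UPDATE oh
    SET oh.ID_big = oh.ID
    FROM dbo.OrderHistory AS oh
    INNER JOIN inserted AS i ON i.ID = oh.ID
    WHERE oh.ID_big IS NULL;
END
GO

-- 3) Backfill existing rows in batches to limit log growth
WHILE 1 = 1
BEGIN
    UPDATE TOP (10000) dbo.OrderHistory
    SET ID_big = ID
    WHERE ID_big IS NULL;
    IF @@ROWCOUNT = 0 BREAK;
END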
Create a parallel table with the longer data type for new rows and UNION the results?
What I had to do was copy the data into a new table with the desired structure (primary/clustered key only; non-clustered indexes and FKs once complete). If you don't have the room, you could bcp the data out and back in. You may need an application outage to make this happen.
What doesn't work: ALTER TABLE Orderhistory ALTER COLUMN ID bigint, because of the primary key. Don't drop the key and then alter the column either, as you will just fill your log file and it will take much longer than the copy/bcp approach.
Never use the SSMS table designer to change a column property; it copies the table into a temp table and then renames it once done. Look up the ALTER TABLE ... ALTER COLUMN syntax and use it, and possibly defragment once complete if you widened a column that sits in the middle of the table.

Detailed error message for violation of Primary Key constraint in sql2008?

I'm inserting a large amount of rows into an empty table with a primary key constraint on one column.
If there is a duplicate key error, is there any way to find out the value of the key (or row) that caused the error?
Validating the data prior to the insert is sadly not something I can do right now.
Using SQL 2008.
Thanks!
Doing the count(*) / GROUP BY thing is something I'm trying to avoid; this is an insert of hundreds of millions of rows from hundreds of different DBs (some of which are on remote servers). I don't have the time or space to do the insert twice.
The data is supposed to be unique from the providers, but unfortunately their validation doesn't seem to work correctly 100% of the time and I'm trying to at least see where it's failing so I can help them troubleshoot.
Thank you!
There's not a way of doing it that won't slow your process down, but here's one way that will make it easier. You can add an INSTEAD OF trigger on that table for inserts and updates. The trigger will check each record before inserting it and make sure it won't cause a primary key violation. You can even create a second table to catch violations, give it a different primary key (like an identity field), and have the trigger insert the offending rows into that error-catching table.
Here's an example of how the trigger can work:
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
    INSERT INTO sometable SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 1;
    INSERT INTO sometableRejects SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 0;
END
In that example, I'm checking a field to make sure it's numeric before I insert the data into the table. You'll need to modify that code to check for primary key violations instead - for example, you might join the INSERTED table to your own existing table and only insert rows where you don't find a match.
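For example, a minimal sketch of that primary-key variant (sometable, sometableRejects and pk_col are placeholder names carried over from the example above; it assumes both tables share the same column layout, and duplicates within the same insert batch would still need extra handling, e.g. with ROW_NUMBER):

CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
    -- first capture the rows whose key already exists in the target table
    INSERT INTO sometableRejects
    SELECT i.* FROM inserted AS i
    WHERE EXISTS (SELECT 1 FROM sometable AS t WHERE t.pk_col = i.pk_col);

    -- then insert only the rows that will not violate the primary key
    INSERT INTO sometable
    SELECT i.* FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM sometable AS t WHERE t.pk_col = i.pk_col);
END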
The solution would depend on how often this happens. If it's <10% of the time then I would do the following:
Insert the data
If an error occurs, do Bravax's revised solution (remove the constraint, insert, find the duplicates, report and remove them, re-enable the constraint).
This means it's only costing you on the few times an error occurs.
If this is happening more often then I'd look at sending the boys over to see the providers :-)
Revised:
Since you don't want to insert twice, could you:
Drop the primary key constraint.
Insert all data into the table
Find any duplicates, and remove them
Then re-add the primary key constraint
Previous reply:
Insert the data into a duplicate of the table without the primary key constraint.
Then run a query on it to determine the rows which have duplicate values for the primary key column.
select count(*), <Primary Key>
from table
group by <Primary Key>
having count(*) > 1
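If you go that route, a sketch of removing the duplicates and re-adding the constraint might look like this (the table, column, and constraint names are hypothetical):

-- keep one row per key, delete the rest (sketch)
;WITH d AS (
    SELECT ROW_NUMBER() OVER (PARTITION BY pk_col ORDER BY pk_col) AS rn
    FROM dbo.staging_table
)
DELETE FROM d WHERE rn > 1;

-- then put the primary key back
ALTER TABLE dbo.staging_table
    ADD CONSTRAINT PK_staging_table PRIMARY KEY (pk_col);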
Use SSIS to import the data and have it check for this as part of the data flow. That is the best way to handle. SSIS can send the bad records to a table (that you can later send to the vendor to help them clean up their act) and process the good ones.
I can't believe that SSIS does not easily address this "reality", because, let's face it, oftentimes you need and want to be able to:
See if a record exists with a certain unique or primary key
If it does not, insert it
If it does, either ignore it or update it.
I don't understand how they would let a product out the door without this capability built-in in an easy-to-use manner. Like, say, set an attribute of a component to automatically check this.

How to increment (or reserve) IDENTITY value in SQL Server without inserting into table

Is there a way to reserve, skip, or increment the value of an identity column?
I have two tables joined in a one-to-one relationship. The first one has an IDENTITY PK column, and the second one has an int PK (not IDENTITY). I used to insert into the first, get the ID, and insert into the second. And it works fine.
Now I need to insert values in second table without inserting into first.
Now, how do I increment the IDENTITY seed so I can insert into the second table but leave a "hole" in the IDs of the first table?
EDIT: More info
This works:
-- I need a new seed number, but not a table row,
-- so I will insert a dummy row, get its ID, and delete it
DECLARE @NewID INT;
INSERT INTO TABLE1 (SomeRequiredField) VALUES ('foo');
SET @NewID = SCOPE_IDENTITY();
DELETE FROM TABLE1 WHERE ID = @NewID;
-- Then I can insert into TABLE2
INSERT INTO TABLE2 (ID, Field, Field) VALUES (@NewID, 'Value', 'Value');
Once again - this works.
Question is can I get ID without inserting into table?
DBCC needs owner rights; is there a clean user callable SQL to do that?
This situation will make your overall data structure very hard to understand. If there is not a relationship between the values, then break the relationship.
There are ways to get around this to do what you are looking for, but typically they are used in distributed environments, and I would not do it here because what you describe appears to be a data model change.
Then it's no longer a one-to-one relationship.
Just break the PK constraint.
Use a DBCC CHECKIDENT statement.
This article from SQL Server Books Online discusses the use of the DBCC CHECKIDENT method to update the identity seed of a table.
From that article:
This example forces the current identity value in the jobs table to a value of 30.
USE pubs
GO
DBCC CHECKIDENT (jobs, RESEED, 30)
GO
I would look into the OUTPUT ... INTO feature if you are using SQL Server 2005 or greater. It allows you to insert into your primary table and capture the IDs assigned at that time to create the rows in the secondary table.
I am assuming that there is a foreign key constraint enforced - because that would be the only reason you would need to do this in the first place.
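A minimal sketch of that idea, reusing the TABLE1/TABLE2 names from the question (the column names are placeholders):

-- capture the identity values generated by the insert (sketch)
DECLARE @NewIds TABLE (ID INT);

INSERT INTO TABLE1 (SomeRequiredField)
OUTPUT INSERTED.ID INTO @NewIds (ID)
VALUES ('foo');

-- use the captured IDs to create the matching rows in the second table
INSERT INTO TABLE2 (ID, Field)
SELECT ID, 'Value' FROM @NewIds;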
How do you plan on matching them up later? I would not put records into the second table without a record in the first; that is why it is set up in a foreign key relationship, to stop that sort of action. Just why do you not want to insert records into the first table anyway? If we knew more about the type of application and why this is necessary, we might be able to guide you to a solution.
This might help:
SET IDENTITY_INSERT [ database_name . [ schema_name ] . ] table { ON | OFF }
http://msdn.microsoft.com/en-us/library/aa259221(SQL.80).aspx
It allows explicit values to be inserted into the identity column of a table.
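For completeness, a small usage sketch (TABLE1 and its columns are taken from the example above; note that only one table per session can have IDENTITY_INSERT ON at a time):

-- explicitly insert a specific value into the identity column (sketch)
SET IDENTITY_INSERT TABLE1 ON;
INSERT INTO TABLE1 (ID, SomeRequiredField) VALUES (5000, 'foo');
SET IDENTITY_INSERT TABLE1 OFF;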
