SQL Server constraint not deleting - sql-server

So, first off, we're using a history system where there's a history version of all of our tables. I'm adding a new column to a table, so I add it to both the dbo and the hist versions, and I add a constraint that the default should be 0. However, I misspelled the column for the hist table. So I add the correct one and go to drop the incorrect one, but it says:

The object 'DF__FCRelease__NumRe__125D4E50' is dependent on column 'NumRelease'.

This is the constraint I mentioned above. I've tried deleting it through SQL, which claims it completes successfully, but the constraint remains. I've tried deleting it through the GUI, but it claims the object does not exist. Here's the SQL I attempted to use to delete the constraint:
IF EXISTS (SELECT * FROM dbo.sysobjects WHERE id = OBJECT_ID(N'[DF__FCRelease__NumRe__125D4E50]') AND type = 'D')
BEGIN
    ALTER TABLE [hist].[FCRelease] DROP CONSTRAINT [DF__FCRelease__NumRe__125D4E50]
END
Any ideas?

This, I believe, is related to a similar question, so I'll give you a similar answer.
You probably need to rebuild your master database. I suspect there is some corruption there that is causing this issue.
As a workaround, you could try removing the column, re-adding it, and then re-adding the constraint (a la here). That might fix the problem as well.
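A minimal sketch of that workaround, assuming the column to replace is the NumRelease named in the error and is an INT. One thing worth checking first: OBJECT_ID(N'[DF__FCRelease__NumRe__125D4E50]') is not schema-qualified, so if the constraint lives in the hist schema the IF EXISTS guard can come back empty and silently skip the DROP; trying OBJECT_ID(N'[hist].[DF__FCRelease__NumRe__125D4E50]') may be all that's needed.
-- Drop the dependent default constraint first, then the column,
-- then re-add both, this time with an explicit constraint name
-- instead of an auto-generated DF__ one:
ALTER TABLE [hist].[FCRelease] DROP CONSTRAINT [DF__FCRelease__NumRe__125D4E50]
ALTER TABLE [hist].[FCRelease] DROP COLUMN [NumRelease]
ALTER TABLE [hist].[FCRelease] ADD [NumRelease] INT NOT NULL
    CONSTRAINT [DF_FCRelease_NumRelease] DEFAULT (0)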

Related

SNOWFLAKE: Constraint on a table that doesn't exist

I use INFORMATION_SCHEMA.TABLE_CONSTRAINTS in a stored procedure to dynamically generate a uniqueness check on the tables.
It works fine until I have a constraint that refers to a table that does not exist or no longer exists.
Have you ever been confronted with this situation? Is there a way to clean the INFORMATION_SCHEMA?
Thanks.
Join to SNOWFLAKE.ACCOUNT_USAGE.TABLES; you'll find the column DELETED there, which you can use for filtering.
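A sketch of that join (the database name and the constraint-type filter are assumptions, and ACCOUNT_USAGE views can lag behind the live INFORMATION_SCHEMA by a few hours):
SELECT tc.constraint_name, tc.table_name
FROM my_db.information_schema.table_constraints AS tc
JOIN snowflake.account_usage.tables AS t
    ON  t.table_catalog = tc.table_catalog
    AND t.table_schema  = tc.table_schema
    AND t.table_name    = tc.table_name
WHERE t.deleted IS NULL                  -- keep only tables that still exist
  AND tc.constraint_type = 'UNIQUE';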

Change dependent records on delete in SQL

I'm adding a new job category to a database. There are something like 20 tables that use jobCategoryID as a foreign key. Is there a way to create a function that would go through those tables and set the jobCategoryID to NULL if the category is ever deleted in the parent table? Inserting the line isn't the issue. It's just for a backout script if the product owners decide at a later date that they don't want to keep the new job category on.
You need to take a couple of actions. First, update the dirty records to NULL. For each of the referencing tables, use:
UPDATE referencing_table
SET jobCategoryID = NULL
WHERE jobCategoryID NOT IN (SELECT jobCategoryID FROM referenced_table)
Then set the delete rule of the foreign keys to SET NULL.
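SQL Server has no syntax for altering just the delete rule, so each foreign key has to be dropped and re-created; a sketch with placeholder names:
ALTER TABLE dbo.ReferencingTable DROP CONSTRAINT FK_ReferencingTable_JobCategory;

ALTER TABLE dbo.ReferencingTable
    ADD CONSTRAINT FK_ReferencingTable_JobCategory
    FOREIGN KEY (jobCategoryID)
    REFERENCES dbo.JobCategory (jobCategoryID)
    ON DELETE SET NULL;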
If you care about performance, follow the instructions below too.
When you have a foreign key but dirty records, the constraint is not trusted, which means the SQL optimizer cannot use it to create the best plans. So run this code to see which constraints the optimizer does not trust:
SELECT * FROM sys.foreign_keys WHERE is_not_trusted = 1
For each constraint returned by the code above, edit and run the following to solve the issue:
ALTER TABLE Table_Name WITH CHECK CHECK CONSTRAINT FK_Name
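If many constraints come back untrusted, a small generator query (a convenience sketch, not part of the original answer) can emit those statements for you:
SELECT 'ALTER TABLE '
     + QUOTENAME(OBJECT_SCHEMA_NAME(parent_object_id)) + '.'
     + QUOTENAME(OBJECT_NAME(parent_object_id))
     + ' WITH CHECK CHECK CONSTRAINT ' + QUOTENAME(name) + ';'
FROM sys.foreign_keys
WHERE is_not_trusted = 1;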

Resetting the primary key to 1

I have a script for a Microsoft SQL Server database which has hundreds of tables, and the tables contain data as well. This is the database of a web application. What I want to do is delete the previous records and reset the primary key to 1 or 0.
I have tried
DBCC CHECKIDENT ('dbo.tbl', RESEED, 0);
but it does not work for me, as in most of the tables the primary key is not an identity.
I cannot truncate the tables, as their primary keys are used as foreign keys in many other tables.
I have also tried adding the identity specification to the primary key of a table, running the CHECKIDENT query, and then changing it back to non-identity, but after adding a record again it starts from where it left off.
Making changes in the code is not an option for me.
Please help.
Going by your question, I am not sure about the main objective. If you need to truncate a lot of tables and change their structures to have an identity property, why can't you disable the foreign keys? In the past I have used a standard process to rebuild a table and migrate all its information; it is a group of steps. I will try to help you, but you should follow the next steps.
Steps:
1) Disable the foreign keys so you can alter the structure of your tables. You can find a solution for this task in the next link:
Temporarily disable all foreign key constraints
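One common pattern from that link, sketched here (sp_MSforeachtable is undocumented but widely used; test it on a copy of the database first):
-- Disable every foreign key in the database:
EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL';

-- ... alter the tables and reseed here ...

-- Re-enable and re-validate them afterwards:
EXEC sp_MSforeachtable 'ALTER TABLE ? WITH CHECK CHECK CONSTRAINT ALL';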
2) Alter the table to add the new identity property; this is a classic ALTER TABLE process.
3) Execute the syntax posted previously:
DBCC CHECKIDENT ('dbo.tbl', RESEED, 0);
Try to follow this path, and if you have any problems, just ask us.
You cannot truncate a table that has relationships. You should remove the relationships first.
My understanding of this question:
You have a database with tables that you want to empty and next have them use primary key values starting at 0 or 1.
Some of these tables use an identity value and you already have a solution for those (you know you can find out which columns have an identity by using the sys.columns view? Look for the is_identity column; see the query below).
Some tables do not use an identity but get their pk values from an unknown source, which we can't modify.
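A quick query along those lines lists every identity column in the database:
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id)        AS table_name,
       name                          AS column_name
FROM sys.columns
WHERE is_identity = 1;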
The only solution I see is creating (or modifying) an AFTER INSERT trigger on those tables that subtracts from the new pk value.
E.g.: your "hidden generator" will generate 5254 as the next value, but you want the next pk value to become one:
CREATE TRIGGER trg_sometable_ai
ON sometable
AFTER INSERT
AS
BEGIN
    -- Shift the freshly generated key down so the next value becomes 1
    UPDATE st
    SET st.pk_col = st.pk_col - 5253
    FROM sometable AS st
    INNER JOIN INSERTED AS i
        ON i.pk_col = st.pk_col
END
You'll have to determine the next value and thus the "subtract value" for each table.
If the code also inserts child records into tables with a foreign key to this table, and uses the previously generated value, you have to modify those triggers as well...
This is a "last resort" solution and something I would recommend against in any scenario that has other options. Manipulating primary key values is generally not a good idea.

Error occurred while changing Is Identity to No in SQL Server

I have to change the auto-increment ID to an explicitly defined ID. For this I go to
database -> Tables -> mytable -> Design. There I set Is Identity (under Identity Specification) to No. But when I click Save, it throws an error saying:
Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created...
Is there any way to do it without dropping the table? I searched this error and found the suggestion to run the following query:
SET IDENTITY_INSERT mytable ON
GO
But when I try to insert from code, it throws the error:
Cannot insert explicit value for identity column in table 'mytable' when IDENTITY_INSERT is set to OFF
Is there any way to get out of this problem?
Once identity, always identity. You cannot change the identity property on a column. Technically, you could use IDENTITY_INSERT to get around it, but this requires setting the option on every single insert you do (this setting doesn't persist over sessions). This is probably not what you want.
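For completeness, a sketch of a working IDENTITY_INSERT call (table and column names are hypothetical). The setting only applies to the current session, and the INSERT must name its columns explicitly, which is why setting it once from SSMS does not help inserts coming from application code:
SET IDENTITY_INSERT dbo.mytable ON;

-- The identity column must appear in an explicit column list:
INSERT INTO dbo.mytable (ID, SomeColumn)
VALUES (42, 'explicit id');

SET IDENTITY_INSERT dbo.mytable OFF;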
Your only alternative, if recreating the table isn't an option, is to create a new column that isn't an identity column, then dropping the old one:
ALTER TABLE MyTable ADD NotAnID INT NULL;
GO
BEGIN TRANSACTION
UPDATE MyTable SET NotAnID = ID;
ALTER TABLE MyTable ALTER COLUMN NotAnID INT NOT NULL;
ALTER TABLE MyTable DROP COLUMN ID;
EXECUTE sp_rename 'MyTable.NotAnID', 'ID', 'COLUMN';
COMMIT;
This assumes your identity column is NOT NULL (as it usually is), that ID is not the primary key, that it isn't participating in foreign key constraints, and that you want the new column to take place of the old one.
If ID is the primary key, this exercise gets more involved because you need to drop the primary key constraint and recreate it -- which has its own challenges. Doubly so if it's also the clustered index. In this case, you are probably better off recreating the table anyway, because recreating the clustered index means the whole table is rewritten -- this will almost certainly interrupt production work, so you may as well let SSMS do the tough work for you. To allow that, go to Tools -> Options -> Designers and uncheck "Prevent saving changes that require table re-creation".
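For reference, the manual route for the primary key case would look roughly like this (the constraint name is hypothetical, and if the key is clustered the ADD step rewrites the whole table):
ALTER TABLE MyTable DROP CONSTRAINT PK_MyTable;
-- ... perform the column swap shown above ...
ALTER TABLE MyTable ADD CONSTRAINT PK_MyTable PRIMARY KEY CLUSTERED (ID);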

Detailed error message for violation of Primary Key constraint in sql2008?

I'm inserting a large amount of rows into an empty table with a primary key constraint on one column.
If there is a duplicate key error, is there any way to find out the value of the key (or row) that caused the error?
Validating the data prior to the insert is sadly not something I can do right now.
Using SQL 2008.
Thanks!
Doing the count(*) / group by thing is something I'm trying to avoid; this is an insert of hundreds of millions of rows from hundreds of different DBs (some of which are on remote servers)... I don't have the time or space to do the insert twice.
The data is supposed to be unique from the providers, but unfortunately their validation doesn't seem to work correctly 100% of the time and I'm trying to at least see where it's failing so I can help them troubleshoot.
Thank you!
There's not a way of doing it that won't slow your process down, but here's one way that will make it easier. You can add an instead-of trigger on that table for inserts and updates. The trigger will check each record before inserting it and make sure it won't cause a primary key violation. You can even create a second table to catch violations, and have a different primary key (like an identity field) on that one, and the trigger will insert the rows into your error-catching table.
Here's an example of how the trigger can work:
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
    INSERT INTO sometable SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 1;
    INSERT INTO sometableRejects SELECT * FROM inserted WHERE ISNUMERIC(somefield) = 0;
END
In that example, I'm checking a field to make sure it's numeric before I insert the data into the table. You'll need to modify that code to check for primary key violations instead - for example, you might join the INSERTED table to your own existing table and only insert rows where you don't find a match, as sketched below.
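A sketch of that variant, reusing the hypothetical names above (note it only checks against rows already in the table; duplicates within a single inserted batch would need an extra check, e.g. with ROW_NUMBER):
CREATE TRIGGER mytrigger ON sometable
INSTEAD OF INSERT
AS BEGIN
    -- Rows whose key is not in the table yet go through:
    INSERT INTO sometable
    SELECT i.*
    FROM inserted AS i
    WHERE NOT EXISTS (SELECT 1 FROM sometable AS s WHERE s.pk_col = i.pk_col);

    -- Rows that would violate the primary key land in the rejects table:
    INSERT INTO sometableRejects
    SELECT i.*
    FROM inserted AS i
    WHERE EXISTS (SELECT 1 FROM sometable AS s WHERE s.pk_col = i.pk_col);
END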
The solution would depend on how often this happens. If it's <10% of the time then I would do the following:
Insert the data
If error then do Bravax's revised solution (remove constraint, insert, find dup, report and kill dup, enable constraint).
This means it's only costing you on the few times an error occurs.
If this is happening more often then I'd look at sending the boys over to see the providers :-)
Revised:
Since you don't want to insert twice, could you:
Drop the primary key constraint.
Insert all data into the table
Find any duplicates, and remove them (see the sketch after these steps)
Then re-add the primary key constraint
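The duplicate-removal step could be done with something like this (a sketch that keeps an arbitrary one row per key; <Primary Key> and the table name are placeholders):
WITH dupes AS (
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY <Primary Key>
                              ORDER BY (SELECT 0)) AS rn
    FROM the_table
)
DELETE FROM dupes
WHERE rn > 1;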
Previous reply:
Insert the data into a duplicate of the table without the primary key constraint.
Then run a query on it to determine rows which have duplicate values for the primary key column.
SELECT COUNT(*), <Primary Key>
FROM table
GROUP BY <Primary Key>
HAVING COUNT(*) > 1
Use SSIS to import the data and have it check for this as part of the data flow. That is the best way to handle it. SSIS can send the bad records to a table (that you can later send to the vendor to help them clean up their act) and process the good ones.
I can't believe that SSIS does not easily address this "reality", because, let's face it, oftentimes you need and want to be able to:
See if a record exists with a certain unique or primary key
If it does not, insert it
If it does, either ignore it or update it.
I don't understand how they would let a product out the door without this capability built in, in an easy-to-use manner. Like, say, setting an attribute of a component to automatically check this.
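Within T-SQL itself, that check-then-insert-or-update pattern is what MERGE provides (available since SQL Server 2008; the names below are hypothetical):
MERGE dbo.TargetTable AS t
USING dbo.StagingTable AS s
    ON t.pk_col = s.pk_col
WHEN MATCHED THEN
    UPDATE SET t.somefield = s.somefield
WHEN NOT MATCHED THEN
    INSERT (pk_col, somefield)
    VALUES (s.pk_col, s.somefield);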
