unique index is not enforced if IsActive column is false [duplicate]

I have a situation where I need to enforce a unique constraint on a set of columns, but only for one value of a column.
So for example I have a table like Table(ID, Name, RecordStatus).
RecordStatus can only have a value 1 or 2 (active or deleted), and I want to create a unique constraint on (ID, RecordStatus) only when RecordStatus = 1, since I don't care if there are multiple deleted records with the same ID.
Apart from writing triggers, can I do that?
I am using SQL Server 2005.

Behold, the filtered index. From the documentation (emphasis mine):
A filtered index is an optimized nonclustered index especially suited to cover queries that select from a well-defined subset of data. It uses a filter predicate to index a portion of rows in the table. A well-designed filtered index can improve query performance as well as reduce index maintenance and storage costs compared with full-table indexes.
And here's an example combining a unique index with a filter predicate:
create unique index MyIndex
on MyTable(ID)
where RecordStatus = 1;
This essentially enforces uniqueness of ID when RecordStatus is 1.
Following the creation of that index, a uniqueness violation will raise an error:
Msg 2601, Level 14, State 1, Line 13
Cannot insert duplicate key row in object 'dbo.MyTable' with unique index 'MyIndex'. The duplicate key value is (9999).
Note: the filtered index was introduced in SQL Server 2008. For earlier versions of SQL Server, please see this answer.

Add a check constraint like this. The idea is that the check fails when RecordStatus = 1 and more than one active row exists for the same Id.
http://msdn.microsoft.com/en-us/library/ms188258.aspx
CREATE TABLE CheckConstraint
(
Id TINYINT,
Name VARCHAR(50),
RecordStatus TINYINT
)
GO
CREATE FUNCTION CheckActiveCount(
@Id INT
) RETURNS INT AS BEGIN
DECLARE @ret INT;
SELECT @ret = COUNT(*) FROM CheckConstraint WHERE Id = @Id AND RecordStatus = 1;
RETURN @ret;
END;
GO
ALTER TABLE CheckConstraint
ADD CONSTRAINT CheckActiveCountConstraint CHECK (NOT (dbo.CheckActiveCount(Id) > 1 AND RecordStatus = 1));
INSERT INTO CheckConstraint VALUES (1, 'No Problems', 2);
INSERT INTO CheckConstraint VALUES (1, 'No Problems', 2);
INSERT INTO CheckConstraint VALUES (1, 'No Problems', 2);
INSERT INTO CheckConstraint VALUES (1, 'No Problems', 1);
INSERT INTO CheckConstraint VALUES (2, 'Oh no!', 1);
INSERT INTO CheckConstraint VALUES (2, 'Oh no!', 2);
-- Msg 547, Level 16, State 0, Line 14
-- The INSERT statement conflicted with the CHECK constraint "CheckActiveCountConstraint". The conflict occurred in database "TestSchema", table "dbo.CheckConstraint".
INSERT INTO CheckConstraint VALUES (2, 'Oh no!', 1);
SELECT * FROM CheckConstraint;
-- Id Name RecordStatus
-- ---- ------------ ------------
-- 1 No Problems 2
-- 1 No Problems 2
-- 1 No Problems 2
-- 1 No Problems 1
-- 2 Oh no! 1
-- 2 Oh no! 2
ALTER TABLE CheckConstraint
DROP CONSTRAINT CheckActiveCountConstraint;
DROP FUNCTION CheckActiveCount;
DROP TABLE CheckConstraint;

You could move the deleted records to a table that lacks the constraint, and perhaps use a view with UNION of the two tables to preserve the appearance of a single table.
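A sketch of that layout (table and view names are illustrative, not from the question):
CREATE TABLE ActiveRecords (
ID INT NOT NULL,
Name VARCHAR(50),
CONSTRAINT UQ_ActiveRecords_ID UNIQUE (ID)
);
CREATE TABLE DeletedRecords (
ID INT NOT NULL, -- duplicates allowed here
Name VARCHAR(50)
);
GO
-- Present both as one logical table.
CREATE VIEW AllRecords AS
SELECT ID, Name, 1 AS RecordStatus FROM ActiveRecords
UNION ALL
SELECT ID, Name, 2 AS RecordStatus FROM DeletedRecords;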

You can do this in a really hacky way...
Create a schema-bound view on your table. Note that a schema-bound view must list its columns explicitly and schema-qualify the table.
CREATE VIEW Whatever WITH SCHEMABINDING
AS
SELECT ID, Name, RecordStatus
FROM dbo.[Table]
WHERE RecordStatus = 1
Now create a unique clustered index on the view with the fields you want.
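For example (a sketch using the question's schema; the first index on a view must be unique and clustered):
CREATE UNIQUE CLUSTERED INDEX UX_Whatever_ID
ON Whatever (ID);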
One note about schemabound views though, if you change the underlying tables you will have to recreate the view. Plenty of gotchas because of that.

For those still searching for a solution, I came across a nice answer to a similar question, and I think it can still be useful to many. While moving deleted records to another table may be a better solution, those who don't want to move the records can use the idea in the linked answer, which is as follows.
Set deleted = 0 when the record is available/active.
Set deleted = <row_id or some other unique value> when marking the row as deleted.
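A sketch of how that plays out (the table layout is assumed for illustration):
CREATE TABLE Records (
RowId INT IDENTITY PRIMARY KEY,
ID INT NOT NULL,
deleted INT NOT NULL DEFAULT 0, -- 0 = active
CONSTRAINT UQ_Records_ID_deleted UNIQUE (ID, deleted)
);
-- Only one active (ID, 0) pair can exist; soft-deleting stamps a unique
-- value into deleted, so historical copies never collide:
UPDATE Records SET deleted = RowId WHERE RowId = 42;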

If you can't use NULL as a RecordStatus as Bill suggested, you could combine his idea with a function-based index. Create a function that returns NULL if the RecordStatus is not one of the values you want to consider in your constraint (and the RecordStatus otherwise), and create an index over that.
That'll have the advantage that you don't have to explicitly examine other rows in the table in your constraint, which could cause you performance issues.
I should say I don't know SQL Server at all, but I have successfully used this approach in Oracle.
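In Oracle that could look like the following sketch; rows where all indexed expressions are NULL are left out of the index entirely, so only active rows are checked:
CREATE UNIQUE INDEX uq_active_id
ON MyTable (CASE WHEN RecordStatus = 1 THEN ID END);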

Because you are going to allow duplicates, a unique constraint will not work. You can create a check constraint for the RecordStatus column and a stored procedure for INSERT that checks for existing active records before inserting duplicate IDs.
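A minimal sketch of such a procedure (names assumed from the question's schema):
CREATE PROCEDURE dbo.InsertMyTableRow
@ID INT, @Name VARCHAR(50), @RecordStatus TINYINT
AS
BEGIN
-- Refuse a second active row for the same ID.
IF @RecordStatus = 1 AND EXISTS
(SELECT 1 FROM dbo.MyTable WHERE ID = @ID AND RecordStatus = 1)
BEGIN
RAISERROR('An active record with this ID already exists.', 16, 1);
RETURN;
END
INSERT INTO dbo.MyTable (ID, Name, RecordStatus)
VALUES (@ID, @Name, @RecordStatus);
END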

Related

SQL Server check constraints - only one particular value per group [duplicate]

How could I set a constraint on a table so that only one of the records has its isDefault bit field set to 1?
The constraint is not table scope, but one default per set of rows, specified by a FormID.
Use a unique filtered index
On SQL Server 2008 or higher you can simply use a unique filtered index
CREATE UNIQUE INDEX IX_TableName_FormID_isDefault
ON TableName(FormID)
WHERE isDefault = 1
Where the table is
CREATE TABLE TableName(
FormID INT NOT NULL,
isDefault BIT NOT NULL
)
For example if you try to insert many rows with the same FormID and isDefault set to 1 you will have this error:
Cannot insert duplicate key row in object 'dbo.TableName' with unique
index 'IX_TableName_FormID_isDefault'. The duplicate key value is (1).
Source: http://technet.microsoft.com/en-us/library/cc280372.aspx
Here's a modification of Damien_The_Unbeliever's solution that allows one default per FormID.
CREATE VIEW form_defaults
AS
SELECT FormID
FROM whatever
WHERE isDefault = 1
GO
CREATE UNIQUE CLUSTERED INDEX ix_form_defaults on form_defaults (FormID)
GO
But the serious relational folks will tell you this information should just be in another table.
CREATE TABLE form (
FormID int NOT NULL PRIMARY KEY,
DefaultWhateverID int FOREIGN KEY REFERENCES Whatever(ID)
)
From a normalization perspective, this would be an inefficient way of storing a single fact.
I would opt to hold this information at a higher level, by storing (in a different table) a foreign key to the identifier of the row which is considered to be the default.
CREATE TABLE [dbo].[Foo](
[Id] [int] NOT NULL,
CONSTRAINT [PK_Foo] PRIMARY KEY CLUSTERED
(
[Id] ASC
) ON [PRIMARY]
) ON [PRIMARY]
GO
CREATE TABLE [dbo].[DefaultSettings](
[DefaultFoo] [int] NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[DefaultSettings] WITH CHECK ADD CONSTRAINT [FK_DefaultSettings_Foo] FOREIGN KEY([DefaultFoo])
REFERENCES [dbo].[Foo] ([Id])
GO
ALTER TABLE [dbo].[DefaultSettings] CHECK CONSTRAINT [FK_DefaultSettings_Foo]
GO
You could use an insert/update trigger.
Within the trigger, after an insert or update, if the count of rows with isDefault = 1 for the affected FormID is more than 1, roll back the transaction.
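A set-based sketch of that trigger, using the question's table and one possible policy (reject the statement rather than demote the old default):
CREATE TRIGGER trg_OneDefaultPerForm ON TableName
AFTER INSERT, UPDATE
AS
BEGIN
SET NOCOUNT ON;
-- Fail the whole statement if any FormID it touched now has more
-- than one default row; this also covers multi-row inserts/updates.
IF EXISTS (SELECT 1
FROM TableName t
WHERE t.isDefault = 1
AND t.FormID IN (SELECT FormID FROM inserted)
GROUP BY t.FormID
HAVING COUNT(*) > 1)
BEGIN
RAISERROR('Only one default row is allowed per FormID.', 16, 1);
ROLLBACK TRANSACTION;
END
END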
CREATE VIEW vOnlyOneDefault
AS
SELECT 1 as Lock
FROM <underlying table>
WHERE Default = 1
GO
CREATE UNIQUE CLUSTERED INDEX IX_vOnlyOneDefault on vOnlyOneDefault (Lock)
GO
You'll need to have the right ANSI settings turned on for this.
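For reference, the session settings indexed views generally require look like this (worth verifying against your version's documentation):
SET ANSI_NULLS, ANSI_PADDING, ANSI_WARNINGS,
ARITHABORT, CONCAT_NULL_YIELDS_NULL, QUOTED_IDENTIFIER ON;
SET NUMERIC_ROUNDABORT OFF;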
I don't know about SQL Server, but if it supports function-based indexes like Oracle does, I hope this can be translated; if not, sorry.
You can create an index like this, supposing the default value is 1234, the column is DEFAULT_COLUMN, and ID_COLUMN is the primary key:
CREATE UNIQUE INDEX only_one_default
ON my_table ( DECODE(DEFAULT_COLUMN, 1234, -1, ID_COLUMN) )
This DDL creates a unique index that indexes -1 when the value of DEFAULT_COLUMN is 1234, and ID_COLUMN otherwise. Then, if two rows have the default DEFAULT_COLUMN value, it raises an exception.
The question implies to me that you have a primary table that has some child records, and one of those child records will be the default record. Using addresses and a separate default table, here is an example of how to make that happen using third normal form. Of course I don't know if it's valuable to answer something that is so old, but it struck my fancy.
--drop table dev.defaultAddress;
--drop table dev.addresses;
--drop table dev.people;
CREATE TABLE [dev].[people](
[Id] [int] identity primary key,
name char(20)
)
GO
CREATE TABLE [dev].[Addresses](
id int identity primary key,
peopleId int foreign key references dev.people(id),
address varchar(100)
) ON [PRIMARY]
GO
CREATE TABLE [dev].[defaultAddress](
id int identity primary key,
peopleId int foreign key references dev.people(id),
addressesId int foreign key references dev.addresses(id))
go
create unique index defaultAddress on dev.defaultAddress (peopleId)
go
create unique index idx_addr_id_person on dev.addresses(peopleid,id);
go
ALTER TABLE dev.defaultAddress
ADD CONSTRAINT FK_Def_People_Address
FOREIGN KEY(peopleID, addressesID)
REFERENCES dev.Addresses(peopleId, id)
go
insert into dev.people (name)
select 'Bill' union
select 'John' union
select 'Harry'
insert into dev.Addresses (peopleid, address)
select 1, '123 someplace' union
select 1,'work place' union
select 2,'home address' union
select 3,'some address'
insert into dev.defaultaddress (peopleId, addressesid)
select 1,1 union
select 2,3
-- so two home addresses are default now
-- try adding another default address to Bill and you get an error
select * from dev.people
join dev.addresses on people.id = addresses.peopleid
left join dev.defaultAddress on defaultAddress.peopleid = people.id and defaultaddress.addressesid = addresses.id
insert into dev.defaultaddress (peopleId, addressesId)
select 1,2
GO
You could do it through an INSTEAD OF trigger, or, if you want it as a constraint, create a constraint that references a function that checks for a row that has the default set to 1.
EDIT oops, needs to be <=
Create table mytable(id1 int, defaultX bit not null default(0))
go
create Function dbo.fx_DefaultExists()
returns int as
Begin
Declare @Ret int
Set @Ret = 0
Select @Ret = count(1) from mytable
Where defaultX = 1
Return @Ret
End
GO
Alter table mytable add
CONSTRAINT [CHK_DEFAULT_SET] CHECK
(([dbo].fx_DefaultExists()<=(1)))
GO
Insert into mytable (id1, defaultX) values (1,1)
Insert into mytable (id1, defaultX) values (2,1)
This is a fairly complex process that cannot be handled through a simple constraint.
We do this through a trigger. However before you write the trigger you need to be able to answer several things:
do we want to fail the insert if a default exists, change it to 0 instead of 1 or change the existing default to 0 and leave this one as 1?
what do we want to do if the default record is deleted and other non default records are still there? Do we make one the default, if so how do we determine which one?
You will also need to be very, very careful to make the trigger handle multiple-row processing. For instance, a client might decide that all of the records of a particular type should be the default. You wouldn't change a million records one at a time, so this trigger needs to be able to handle that. It also needs to handle it without looping or the use of a cursor (you really don't want the type of transaction discussed above to take hours, locking up the table the whole time).
You also need a very extensive testing scenario for this trigger before it goes live. You need to test:
adding a record with no default and it is the first record for that customer
adding a record with a default and it is the first record for that customer
adding a record with no default and it is not the first record for that customer
adding a record with a default and it is not the first record for that customer
Updating a record to have the default when no other record has it (assuming you don't require one record to always be set as the default)
Updating a record to remove the default
Deleting the record with the default
Deleting a record without the default
Performing a mass insert with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record inserts
Performing a mass update with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record updates
Performing a mass delete with multiple situations in the data including two records which both have isdefault set to 1 and all of the situations tested when running individual record deletes
@Andy Jones gave an answer above closest to mine, but bearing in mind the Rule of Three, I placed the logic directly in the stored proc that updates this table. This was my simple solution. If I need to update the table from elsewhere, I will move the logic to a trigger. The one-default rule applies to each set of records specified by a FormID and a ConfigID:
ALTER proc [dbo].[cpForm_UpdateLinkedReport]
@reportLinkId int,
@defaultYN bit,
@linkName nvarchar(150)
as
if @defaultYN = 1
begin
declare @formId int, @configId int
select @formId = FormID, @configId = ConfigID from csReportLink where ReportLinkID = @reportLinkId
update csReportLink set DefaultYN = 0 where isnull(ConfigID, @configId) = @configId and FormID = @formId
end
update
csReportLink
set
DefaultYN = @defaultYN,
LinkName = @linkName
where
ReportLinkID = @reportLinkId

Ensuring relationship integrity in a database modelling sets and subsets

I have a model that consists in these 3 tables (among others):
Item
PK id_item
Set
PK id_set
Subset
PK id_subset
Each Item MUST belong to one and just one Set (1..N)
You can define zero or more Subsets for each Set (0..N)
Each Item belongs to zero or one Subset (0..1)
I've modelled the database by adding the following FKs:
Item
PK id_item
FK id_set
FK id_subset
Set
PK id_set
Subset
PK id_subset
FK id_set
I cannot find a way to forbid the database from accepting Items belonging to one Set (A) and to a Subset (B2) that belongs to a different Set (B).
Is there anyway to do so? Or is this just a bad design/modelling?
This is a SQL Server 2008 database
First, if an Item can belong to a Subset, you must add a foreign key between the Item table and the Subset table.
Second, add a check constraint on the Item table that raises an exception if the id_subset does not belong to the id_set.
To do that, first create a user-defined function to test the values:
CREATE FUNCTION udf_CheckSubSet
(
@id_set int,
@id_subset int
)
RETURNS int
AS
BEGIN
IF @id_subset IS NULL OR EXISTS (
SELECT 1
FROM Subset
WHERE id_subset = @id_subset
AND id_set = @id_set
)
BEGIN
RETURN 1
END
-- logical else
RETURN 0
END
then you create the check constraint:
ALTER TABLE Item
ADD CONSTRAINT cc_Item_subset CHECK (dbo.udf_CheckSubSet(id_set, id_subset) = 1);
However, I also suggest creating a stored procedure to insert the item, and testing inside the stored procedure before inserting the item.
The reason for this is that it's much more expensive (performance-wise) to handle exceptions than to simply test the input before inserting it into the table.
You might be wondering why you even need the check constraint if you already handle the problem with the stored procedure. The answer is that the check constraint will not allow inserting or updating bad data even if someone works directly from SSMS or writes an ad hoc insert or update statement.
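For instance, here is a sketch of such a procedure reusing the function above (the procedure name is illustrative):
CREATE PROCEDURE InsertItem
@id_item int,
@id_set int,
@id_subset int = NULL
AS
BEGIN
-- Cheap pre-check instead of paying for a constraint exception.
IF dbo.udf_CheckSubSet(@id_set, @id_subset) = 0
BEGIN
RAISERROR('The subset does not belong to the given set.', 16, 1);
RETURN;
END
INSERT INTO Item (id_item, id_set, id_subset)
VALUES (@id_item, @id_set, @id_subset);
END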
Disclaimer: while it is possible to implement this kind of constraint using database schema alone, I strongly advise you against using the approach explained below in any real life project.
Academically speaking, in order to do what you want you have to migrate the identifying key from Set via both Set and Subset foreign keys. The schema will look like this:
use master;
go
if db_id('SampleDB') is not null
set noexec on;
go
create database SampleDB;
go
use SampleDB;
go
/*==============================================================*/
/* Table: Sets */
/*==============================================================*/
create table dbo.[Sets] (
[Id] int not null,
[Name] varchar(50) not null,
constraint [PK_Sets] primary key (Id)
)
go
/*==============================================================*/
/* Table: SubSets */
/*==============================================================*/
create table dbo.[SubSets] (
[SetId] int not null,
[SubsetId] int not null,
[Name] varchar(50) not null,
constraint [PK_SubSets] primary key (SetId, SubsetId)
)
go
alter table dbo.SubSets
add constraint FK_SubSets_Sets_SetId foreign key (SetId)
references dbo.Sets (Id)
go
/*==============================================================*/
/* Table: Items */
/*==============================================================*/
create table dbo.[Items] (
[Id] int not null,
[SetId] int not null,
[SubsetId] int null,
[Name] varchar(50) not null,
constraint [PK_Items] primary key (Id)
)
go
alter table dbo.Items
add constraint FK_Items_Sets_SetId foreign key (SetId)
references dbo.Sets (Id)
go
alter table dbo.Items
add constraint FK_Items_SubSets_SetIdSubsetId foreign key (SetId, SubsetId)
references dbo.SubSets (SetId, SubsetId)
go
set noexec off;
go
use master;
go
As you can see, the PK on the dbo.SubSets table is somewhat lame. It serves its purpose, of course, but it could have been made simpler. Another unusual thing is that the SubsetId column in the dbo.Items table participates in 2 foreign keys that point to different tables.
You can insert some data into this schema, and it will be perfectly fine:
insert into dbo.Sets (Id, Name)
values
(1, 'Set 1'),
(2, 'Set 2');
go
insert into dbo.SubSets (SetId, SubsetId, Name)
values
(1, 1, 'Subset 1-1'),
(1, 2, 'Subset 1-2');
go
insert into dbo.Items (Id, SetId, SubsetId, Name)
values
(1, 1, 1, 'Banana'),
(2, 1, 1, 'Plate'),
(3, 1, 2, 'Charger'),
(4, 1, null, 'Toothpick'),
(5, 2, null, 'Cup');
And you will be hit with a FK constraint violation when you try to add contradictory data, such as this:
insert into dbo.Items (Id, SetId, SubsetId, Name)
values
(6, 2, 1, 'Fake t-shirt');
The subset 1 does not belong to the set 2, so the command above will not succeed.
Now - why you should never use this design approach, unless being forced to do so at gunpoint:
Not every business constraint can and should be implemented on the schema level. Actually, writing it down in the stored procedure will be easier to understand, maintain and work with, in most cases;
It contains rarely used tricks which are very confusing and unexpected for most people, even seasoned database professionals. All of this adds up to the cost of maintenance;
Last but not least - queries that will work correctly with this kind of schema will be, how shall I put this, awkward and difficult to write. Also, you will most probably encounter a lot of problems if you try to combine this schema with any kind of ORM. Or maybe not; or maybe they will only manifest themselves once put in production, etc.

Check constraint UDF with multiple input parameters not working

I'm trying to implement a check constraint on a table such that records can't be inserted where there exists a record for which two of the columns ("Int_1" and "Int_2") already have the value we're trying to insert E.g.:
ID Name Int_1 Int_2
1 Dave 1 2
Inserting (2, Steve, 2, 2) into the table above would be okay, as would (3, Mike, 1, 3), but inserting values where Int_1 AND Int_2 already exist is not allowed, i.e. (4, Stuart, 1, 2) is illegal.
I thought defining my table thus would work:
CREATE TABLE [Table](
[ID] [int] IDENTITY(1,1) NOT NULL,
[Name] [varchar](255) NOT NULL,
[Int_1] [int] NOT NULL,
[Int_2] [int] NOT NULL,
CONSTRAINT [chk_Stuff] CHECK (dbo.chk_Ints(Int_1, Int_2)=1))
where dbo.chk_Ints is defined as:
CREATE FUNCTION [dbo].[chk_Ints](@Int_1 int, @Int_2 int)
RETURNS int
AS
BEGIN
DECLARE @Result int
IF NOT EXISTS (SELECT * FROM [Table] WHERE Int_1 = @Int_1 AND Int_2 = @Int_2)
BEGIN
SET @Result = 1
END
ELSE
BEGIN
SET @Result = 0
END
RETURN @Result
END
GO
When using the combo above, if I try to insert any record whatsoever, SQL tells me I've broken my check constraint. I can remove all rows from the table and try to insert a first record, and SQL tells me I've broken my constraint, which I can't possibly have done!
I've scoured the internet for quite a while now looking for examples of check constraints where the UDF depends on multiple table columns, but to no avail. Any ideas as to why this might not work?
Thanks in advance :)
Yes, this may seem baffling until you realise what's going on, at which point it becomes quite obvious.
The function is called for the values that are in the row you are trying to insert. But think of how the function is being called. It is a check constraint that calls it.
Next, think of the parameters being passed. Where do they come from? According to the definition, the check constraint takes them from columns Int_1 and Int_2.
So, it passes them as column values. But column values must belong to a row. Which row is it in this case? The one you are trying to insert!
That means your row is inserted at this point, only the transaction is still pending. And yet the fact that the row is in the table is crucial, because that's what the function finds and reports on with the 1 result.
Thus, what's happening is this:
you are trying to insert a row,
the function sees that row and says that a row with the given parameters already exists,
the check constraint "reacts" accordingly by prohibiting the insert,
the insert is rolled back.
Of course, now that you realise all that, it is easy to come up with a different logic of checking for duplicates. Basically, your function should "keep in mind" that the new row is already in the table, and so it should try and determine whether its presence in the table violates any rules that you want to establish. You could, for instance, count the rows matching the given parameters and see if the result is not greater than 1:
IF (SELECT COUNT(*) FROM [Table] WHERE Int_1 = @Int_1 AND Int_2 = @Int_2) < 2
BEGIN
SET @Result = 1
END
ELSE
BEGIN
SET @Result = 0
END
However, the entire idea of using a function in a check constraint for this job is very much inferior to just adding a unique constraint on the two columns, as suggested by @a_horse_with_no_name. Do this:
ALTER TABLE [Table]
ADD CONSTRAINT UQ_Table_Int1_Int2 UNIQUE (Int_1, Int_2);
and you can forget about duplicates.

SQL Server, How to set auto increment after creating a table without data loss?

I have a table table1 in SQL Server 2008 and it has records in it.
I want the primary key table1_Sno column to be an auto-incrementing column. Can this be done without any data transfer or cloning of table?
I know that I can use ALTER TABLE to add an auto-increment column, but can I simply add the AUTO_INCREMENT option to an existing column that is the primary key?
Changing the IDENTITY property is really a metadata only change. But to update the metadata directly requires starting the instance in single user mode and messing around with some columns in sys.syscolpars and is undocumented/unsupported and not something I would recommend or will give any additional details about.
For people coming across this answer on SQL Server 2012+ by far the easiest way of achieving this result of an auto incrementing column would be to create a SEQUENCE object and set the next value for seq as the column default.
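For instance, a sketch (names are illustrative, and the sequence should start past the current maximum of the column):
CREATE SEQUENCE dbo.seq_MyTable_Id AS int
START WITH 1 INCREMENT BY 1;
ALTER TABLE dbo.MyTable
ADD CONSTRAINT DF_MyTable_Id
DEFAULT (NEXT VALUE FOR dbo.seq_MyTable_Id) FOR Id;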
Alternatively, or for previous versions (from 2005 onwards), the workaround posted on this connect item shows a completely supported way of doing this without any need for size of data operations using ALTER TABLE...SWITCH. Also blogged about on MSDN here. Though the code to achieve this is not very simple and there are restrictions - such as the table being changed can't be the target of a foreign key constraint.
Example code.
Set up test table with no identity column.
CREATE TABLE dbo.tblFoo
(
bar INT PRIMARY KEY,
filler CHAR(8000),
filler2 CHAR(49)
)
INSERT INTO dbo.tblFoo (bar)
SELECT TOP (10000) ROW_NUMBER() OVER (ORDER BY (SELECT 0))
FROM master..spt_values v1, master..spt_values v2
Alter it to have an identity column (more or less instant).
BEGIN TRY
BEGIN TRANSACTION;
/*Using DBCC CHECKIDENT('dbo.tblFoo') is slow so use dynamic SQL to
set the correct seed in the table definition instead*/
DECLARE @TableScript nvarchar(max)
SELECT @TableScript =
'
CREATE TABLE dbo.Destination(
bar INT IDENTITY(' +
CAST(ISNULL(MAX(bar),0)+1 AS VARCHAR) + ',1) PRIMARY KEY,
filler CHAR(8000),
filler2 CHAR(49)
)
ALTER TABLE dbo.tblFoo SWITCH TO dbo.Destination;
'
FROM dbo.tblFoo
WITH (TABLOCKX,HOLDLOCK)
EXEC(@TableScript)
DROP TABLE dbo.tblFoo;
EXECUTE sp_rename N'dbo.Destination', N'tblFoo', 'OBJECT';
COMMIT TRANSACTION;
END TRY
BEGIN CATCH
IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
PRINT ERROR_MESSAGE();
END CATCH;
Test the result.
INSERT INTO dbo.tblFoo (filler,filler2)
OUTPUT inserted.*
VALUES ('foo','bar')
Gives
bar filler filler2
----------- --------- ---------
10001 foo bar
Clean up
DROP TABLE dbo.tblFoo
SQL Server: How to set auto-increment on a table with rows in it:
This strategy physically copies the rows around twice, which can take much longer if the table you are copying is very large.
You could save out your data, drop and rebuild the table with the auto-increment and primary key, then load the data back in.
I'll walk you through with an example:
Step 1, create table foobar (without primary key or auto-increment):
CREATE TABLE foobar(
id int NOT NULL,
name nchar(100) NOT NULL,
)
Step 2, insert some rows
insert into foobar values(1, 'one');
insert into foobar values(2, 'two');
insert into foobar values(3, 'three');
Step 3, copy out foobar data into a temp table:
select * into temp_foobar from foobar
Step 4, drop table foobar:
drop table foobar;
Step 5, recreate your table with the primary key and auto-increment properties:
CREATE TABLE foobar(
id int primary key IDENTITY(1, 1) NOT NULL,
name nchar(100) NOT NULL,
)
Step 6, insert your data from temp table back into foobar
-- IDENTITY_INSERT must be enabled on the target table (foobar), not the temp table
SET IDENTITY_INSERT foobar ON
INSERT into foobar (id, name) select id, name from temp_foobar;
SET IDENTITY_INSERT foobar OFF
Step 7, drop your temp table, and check to see if it worked:
drop table temp_foobar;
select * from foobar;
You should get this, and when you inspect the foobar table, the id column is auto-increment of 1 and id is a primary key:
1 one
2 two
3 three
If you want to do this via the designer you can do it by following the instructions here "Save changes is not permitted" when changing an existing column to be nullable
Yes, you can. Go to Tools > Options > Designers > Table and Database Designers and uncheck "Prevent saving changes that require table re-creation".
No, you cannot add an auto-increment option to an existing column with data; I think the option which you mentioned is the best.
Have a look here.
If you don't want to add a new column, and you can guarantee that your current int column is unique, you could select all of the data out into a temporary table, drop the table and recreate it with the IDENTITY column specified. Then, using SET IDENTITY_INSERT ON, you can insert all of your data in the temporary table into the new table.
The script below can be a good solution. It worked on large data as well.
ALTER DATABASE WMlive SET RECOVERY SIMPLE WITH NO_WAIT
ALTER TABLE WMBOMTABLE DROP CONSTRAINT PK_WMBomTable
ALTER TABLE WMBOMTABLE drop column BOMID
ALTER TABLE WMBOMTABLE ADD BomID int IDENTITY(1, 1) NOT NULL;
ALTER TABLE WMBOMTABLE ADD CONSTRAINT PK_WMBomTable PRIMARY KEY CLUSTERED (BomID);
ALTER DATABASE WMlive SET RECOVERY FULL WITH NO_WAIT

How do I create a unique constraint that also allows nulls?

I want to have a unique constraint on a column which I am going to populate with GUIDs. However, my data contains null values for this column. How do I create the constraint that allows multiple null values?
Here's an example scenario. Consider this schema:
CREATE TABLE People (
Id INT CONSTRAINT PK_MyTable PRIMARY KEY IDENTITY,
Name NVARCHAR(250) NOT NULL,
LibraryCardId UNIQUEIDENTIFIER NULL,
CONSTRAINT UQ_People_LibraryCardId UNIQUE (LibraryCardId)
)
Then see this code for what I'm trying to achieve:
-- This works fine:
INSERT INTO People (Name, LibraryCardId)
VALUES ('John Doe', 'AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA');
-- This also works fine, obviously:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Marie Doe', 'BBBBBBBB-BBBB-BBBB-BBBB-BBBBBBBBBBBB');
-- This would *correctly* fail:
--INSERT INTO People (Name, LibraryCardId)
--VALUES ('John Doe the Second', 'AAAAAAAA-AAAA-AAAA-AAAA-AAAAAAAAAAAA');
-- This works fine this one first time:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Richard Roe', NULL);
-- THE PROBLEM: This fails even though I'd like to be able to do this:
INSERT INTO People (Name, LibraryCardId)
VALUES ('Marcus Roe', NULL);
The final statement fails with a message:
Violation of UNIQUE KEY constraint 'UQ_People_LibraryCardId'. Cannot insert duplicate key in object 'dbo.People'.
How can I change my schema and/or uniqueness constraint so that it allows multiple NULL values, while still checking for uniqueness on actual data?
What you're looking for is indeed part of the ANSI standards SQL:92, SQL:1999 and SQL:2003, i.e. a UNIQUE constraint must disallow duplicate non-NULL values but accept multiple NULL values.
In the Microsoft world of SQL Server however, a single NULL is allowed but multiple NULLs are not...
In SQL Server 2008, you can define a unique filtered index based on a predicate that excludes NULLs:
CREATE UNIQUE NONCLUSTERED INDEX idx_yourcolumn_notnull
ON YourTable(yourcolumn)
WHERE yourcolumn IS NOT NULL;
In earlier versions, you can resort to VIEWS with a NOT NULL predicate to enforce the constraint.
SQL Server 2008 +
You can create a unique index that accepts multiple NULLs with a WHERE clause. See the answer below.
Prior to SQL Server 2008
You cannot create a UNIQUE constraint and allow NULLs. You need to set a default value of NEWID().
Update the existing values to NEWID() where NULL before creating the UNIQUE constraint.
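A sketch of that workaround against the question's People table:
UPDATE People SET LibraryCardId = NEWID() WHERE LibraryCardId IS NULL;
ALTER TABLE People
ADD CONSTRAINT DF_People_LibraryCardId DEFAULT NEWID() FOR LibraryCardId;
Note that this trades NULLs for dummy GUIDs, so a missing card is no longer represented by NULL.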
SQL Server 2008 And Up
Just filter a unique index:
CREATE UNIQUE NONCLUSTERED INDEX UQ_Party_SamAccountName
ON dbo.Party(SamAccountName)
WHERE SamAccountName IS NOT NULL;
In Lower Versions, A Materialized View Is Still Not Required
For SQL Server 2005 and earlier, you can do it without a view. I just added a unique constraint like you're asking for to one of my tables. Given that I want uniqueness in column SamAccountName, but I want to allow multiple NULLs, I used a materialized column rather than a materialized view:
ALTER TABLE dbo.Party ADD SamAccountNameUnique
AS (Coalesce(SamAccountName, Convert(varchar(11), PartyID)))
ALTER TABLE dbo.Party ADD CONSTRAINT UQ_Party_SamAccountName
UNIQUE (SamAccountNameUnique)
You simply have to put something in the computed column that will be guaranteed unique across the whole table when the actual desired unique column is NULL. In this case, PartyID is an identity column and being numeric will never match any SamAccountName, so it worked for me. You can try your own method—be sure you understand the domain of your data so that there is no possibility of intersection with real data. That could be as simple as prepending a differentiator character like this:
Coalesce('n' + SamAccountName, 'p' + Convert(varchar(11), PartyID))
Even if PartyID became non-numeric someday and could coincide with a SamAccountName, now it won't matter.
Note that the presence of an index including the computed column implicitly causes each expression result to be saved to disk with the other data in the table, which DOES take additional disk space.
Note that if you don't want an index, you can still save CPU by making the expression be precalculated to disk by adding the keyword PERSISTED to the end of the column expression definition.
In SQL Server 2008 and up, definitely use the filtered solution instead if you possibly can!
Controversy
Please note that some database professionals will see this as a case of "surrogate NULLs", which definitely have problems (mostly due to issues around trying to determine when something is a real value or a surrogate value for missing data; there can also be issues with the number of non-NULL surrogate values multiplying like crazy).
However, I believe this case is different. The computed column I'm adding will never be used to determine anything. It has no meaning of itself, and encodes no information that isn't already found separately in other, properly defined columns. It should never be selected or used.
So, my story is that this is not a surrogate NULL, and I'm sticking to it! Since we don't actually want the non-NULL value for any purpose other than to trick the UNIQUE index to ignore NULLs, our use case has none of the problems that arise with normal surrogate NULL creation.
All that said, I have no problem with using an indexed view instead—but it brings some issues with it such as the requirement of using SCHEMABINDING. Have fun adding a new column to your base table (you'll at minimum have to drop the index, and then drop the view or alter the view to not be schema bound). See the full (long) list of requirements for creating an indexed view in SQL Server (2005) (also later versions), (2000).
Update
If your column is numeric, there may be the challenge of ensuring that the unique constraint using Coalesce does not result in collisions. In that case, there are some options. One might be to use a negative number, to put the "surrogate NULLs" only in the negative range, and the "real values" only in the positive range. Alternately, the following pattern could be used. In table Issue (where IssueID is the PRIMARY KEY), there may or may not be a TicketID, but if there is one, it must be unique.
ALTER TABLE dbo.Issue ADD TicketUnique
AS (CASE WHEN TicketID IS NULL THEN IssueID END);
ALTER TABLE dbo.Issue ADD CONSTRAINT UQ_Issue_Ticket_AllowNull
UNIQUE (TicketID, TicketUnique);
If IssueID 1 has ticket 123, the UNIQUE constraint will be on values (123, NULL). If IssueID 2 has no ticket, it will be on (NULL, 2). Some thought will show that this constraint cannot be duplicated for any row in the table, and still allows multiple NULLs.
For people who are using Microsoft SQL Server Manager and want to create a Unique but Nullable index you can create your unique index as you normally would then in your Index Properties for your new index, select "Filter" from the left hand panel, then enter your filter (which is your where clause). It should read something like this:
([YourColumnName] IS NOT NULL)
This works with MSSQL 2012
When I applied the unique index below:
CREATE UNIQUE NONCLUSTERED INDEX idx_badgeid_notnull
ON employee(badgeid)
WHERE badgeid IS NOT NULL;
every non null update and insert failed with the error below:
UPDATE failed because the following SET options have incorrect settings: 'ARITHABORT'.
I found this on MSDN
SET ARITHABORT must be ON when you are creating or changing indexes on computed columns or indexed views. If SET ARITHABORT is OFF, CREATE, UPDATE, INSERT, and DELETE statements on tables with indexes on computed columns or indexed views will fail.
So to get this to work correctly I did this
Right click [Database] --> Properties --> Options --> Other Options --> Miscellaneous --> Arithmetic Abort Enabled --> true
I believe it is possible to set this option in code using
ALTER DATABASE "DBNAME" SET ARITHABORT ON
but I have not tested this
It can be done in the designer as well
Right click on the index > Properties to get the properties window, then set the filter there
Create a schema-bound view that selects only non-NULL values of the column, and create a UNIQUE CLUSTERED INDEX on it (the first index on a view must be unique and clustered):
CREATE VIEW myview WITH SCHEMABINDING
AS
SELECT mycolumn -- list the view's columns explicitly; SELECT * is not allowed in a schema-bound view
FROM dbo.mytable
WHERE mycolumn IS NOT NULL
GO
CREATE UNIQUE CLUSTERED INDEX ux_myview_mycolumn ON myview (mycolumn)
Note that you'll need to perform INSERTs and UPDATEs on the view instead of the table.
You may do it with an INSTEAD OF trigger:
CREATE TRIGGER trg_mytable_insert ON mytable
INSTEAD OF INSERT
AS
BEGIN
INSERT
INTO myview (mycolumn)
SELECT mycolumn -- match the view's column list
FROM inserted
END
It is possible to create a unique constraint on a Clustered Indexed View
You can create the View like this:
CREATE VIEW dbo.VIEW_OfYourTable WITH SCHEMABINDING AS
SELECT YourUniqueColumnWithNullValues FROM dbo.YourTable
WHERE YourUniqueColumnWithNullValues IS NOT NULL;
and the unique constraint like this:
CREATE UNIQUE CLUSTERED INDEX UIX_VIEW_OFYOURTABLE
ON dbo.VIEW_OfYourTable(YourUniqueColumnWithNullValues)
In my experience - if you're thinking a column needs to allow NULLs but also needs to be UNIQUE for values where they exist, you may be modelling the data incorrectly. This often suggests you're creating a separate sub-entity within the same table as a different entity. It probably makes more sense to have this entity in a second table.
In the provided example, I would put LibraryCardId in a separate LibraryCards table with a unique not-null foreign key to the People table:
CREATE TABLE People (
Id INT CONSTRAINT PK_MyTable PRIMARY KEY IDENTITY,
Name NVARCHAR(250) NOT NULL,
)
CREATE TABLE LibraryCards (
LibraryCardId UNIQUEIDENTIFIER CONSTRAINT PK_LibraryCards PRIMARY KEY,
PersonId INT NOT NULL
CONSTRAINT UQ_LibraryCardId_PersonId UNIQUE (PersonId),
FOREIGN KEY (PersonId) REFERENCES People(id)
)
This way you don't need to bother with a column being both unique and nullable. If a person doesn't have a library card, they just won't have a record in the library cards table. Also, if there are additional attributes about the library card (perhaps Expiration Date or something), you now have a logical place to put those fields.
Maybe consider an "INSTEAD OF" trigger and do the check yourself? With a non-clustered (non-unique) index on the column to enable the lookup.
As stated before, SQL Server doesn't implement the ANSI standard when it comes to the UNIQUE constraint. There has been a ticket on Microsoft Connect for this since 2007. As suggested there and here, the best options as of today are to use a filtered index, as stated in another answer, or a computed column, e.g.:
CREATE TABLE [Orders] (
[OrderId] INT IDENTITY(1,1) NOT NULL,
[TrackingId] varchar(11) NULL,
...
[ComputedUniqueTrackingId] AS (
CASE WHEN [TrackingId] IS NULL
THEN '#' + cast([OrderId] as varchar(12))
ELSE [TrackingId] END
),
CONSTRAINT [UQ_TrackingId] UNIQUE ([ComputedUniqueTrackingId])
)
You can create an INSTEAD OF trigger to check for specific conditions and error if they are met. Creating an index can be costly on larger tables.
Here's an example:
CREATE TRIGGER PONY.trg_pony_unique_name ON PONY.tbl_pony
INSTEAD OF INSERT, UPDATE
AS
BEGIN
IF EXISTS(
SELECT TOP (1) 1
FROM inserted i
GROUP BY i.pony_name
HAVING COUNT(1) > 1
)
OR EXISTS(
SELECT TOP (1) 1
FROM PONY.tbl_pony t
INNER JOIN inserted i
ON i.pony_name = t.pony_name
)
THROW 911911, 'A pony must have a name as unique as s/he is. --PAS', 16;
ELSE
INSERT INTO PONY.tbl_pony (pony_name, stable_id, pet_human_id)
SELECT pony_name, stable_id, pet_human_id
FROM inserted
END
You can't do this with a UNIQUE constraint, but you can do this in a trigger.
CREATE TRIGGER [dbo].[OnInsertMyTableTrigger]
ON [dbo].[MyTable]
INSTEAD OF INSERT
AS
BEGIN
SET NOCOUNT ON;
DECLARE @Column1 INT;
DECLARE @Column2 INT; -- allow nulls on this column
-- NB: this reads a single row, so the trigger as written handles
-- single-row inserts only; a multi-row INSERT would need a set-based check.
SELECT @Column1 = Column1, @Column2 = Column2 FROM inserted;
-- Check if an existing record already exists; if not, allow the insert.
IF NOT EXISTS(SELECT * FROM dbo.MyTable WHERE Column1 = @Column1 AND Column2 = @Column2 AND @Column2 IS NOT NULL)
BEGIN
INSERT INTO dbo.MyTable (Column1, Column2)
SELECT @Column1, @Column2;
END
ELSE
BEGIN
RAISERROR('The unique constraint applies on Column1 %d, AND Column2 %d, unless Column2 is NULL.', 16, 1, @Column1, @Column2);
ROLLBACK TRANSACTION;
END
END
CREATE UNIQUE NONCLUSTERED INDEX [UIX_COLUMN_NAME]
ON [dbo].[Employee]([Username] ASC) WHERE ([Username] IS NOT NULL)
WITH (ALLOW_PAGE_LOCKS = ON, ALLOW_ROW_LOCKS = ON, PAD_INDEX = OFF, SORT_IN_TEMPDB = OFF,
DROP_EXISTING = OFF, IGNORE_DUP_KEY = OFF, STATISTICS_NORECOMPUTE = OFF, ONLINE = OFF,
MAXDOP = 0) ON [PRIMARY];
This is handy when, for example, a registration form inserts an empty string into the column whenever its textbox is left blank and the user clicks submit; the filter below keeps those empty strings out of the unique index:
CREATE UNIQUE NONCLUSTERED INDEX [IX_tableName_Column]
ON [dbo].[tableName]([columnName] ASC) WHERE [columnName] != '';
