T-SQL logic for roll up and group by

I have a question about collapsing, or rolling up, data based on the logic below. How can I implement it?
The rule that allows episodes to be condensed into a single continuous care episode is a discharge code of 22 followed by an admission code of 4 on the same day.
Continuous care implementation update:
EPN is the business key.
episode_continuous_care_key is an artificial key that can be generated with a row number function.
Below are the table structure and sample data (a sketch of the collapse rule follows them).
DROP TABLE IF EXISTS #source;
CREATE TABLE #source(patidid varchar(20),epn int,preadmitdate datetime,adminttime varchar(10),
admitcode varchar(10),datedischarge datetime,disctime varchar(10),disccode varchar(10))
INSERT INTO #source VALUES
(1849,1,'4/23/2020','7:29',1,'7/31/2020','9:03',22)
,(1849,2,'7/31/2020','11:00',4,'7/31/2020','12:09',22)
,(1849,3,'7/31/2020','13:10',4,'8/24/2020','10:36',10)
,(1849,4,'8/26/2020','12:25',2,null,null,null)
,(1850,1,'4/23/2020','7:33',1,'6/29/2020','7:30',22)
,(1850,2,'6/29/2020','9:35',4,'7/8/2020','10:51',7)
,(1850,3,'7/10/2020','11:51',3,'7/29/2020','9:12',7)
,(1850,4,'7/31/2020','11:00',2,'8/6/2020','10:24',22)
,(1850,5,'8/6/2020','12:26',4,null,null,null)
,(1851,1,'4/23/2020','7:35',1,'6/24/2020','13:45',22)
,(1851,2,'6/24/2020','15:06',4,'9/24/2020','15:00',2)
,(1851,3,'12/4/2020','8:59',0,null,null,null)
,(1852,1,'4/23/2020','7:37',1,'7/6/2020','11:15',20)
,(1852,2,'7/8/2020','10:56',0,'7/10/2020','11:46',2)
,(1852,3,'7/10/2020','11:47',2,'7/28/2020','13:16',22)
,(1852,4,'7/28/2020','15:17',4,'8/4/2020','11:37',22)
,(1852,5,'8/4/2020','13:40',4,'11/18/2020','15:43',2)
,(1852,6,'12/2/2020','15:23',2,null,null,null)
,(1853,1,'4/23/2020','7:40',1,'7/1/2020','8:30',22)
,(1853,2,'7/1/2020','14:57',4,'12/4/2020','12:55',7)
,(1854,1,'4/23/2020','7:44',1,'7/31/2020','13:07',20)
,(1854,2,'8/3/2020','16:30',0,'8/5/2020','9:32',2)
,(1854,3,'8/5/2020','10:34',2,'8/24/2020','8:15',22)
,(1854,4,'8/24/2020','10:33',4,'12/4/2020','7:30',22)
,(1854,5,'12/4/2020','9:13',4,null,null,null)
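For reference, here is a minimal sketch of that collapse rule applied directly to #source. It assumes the rule exactly as stated (previous discharge code 22, current admission code 4 on the same calendar day) and derives episode_continuous_care_key as a running count of episode starts; only the table and column names above are taken from the question, everything else is illustrative.
;WITH flagged AS
(
    SELECT s.*,
           CASE
               WHEN s.admitcode = '4'
                AND LAG(s.disccode) OVER (PARTITION BY s.patidid ORDER BY s.epn) = '22'
                AND CONVERT(date, s.preadmitdate)
                    = LAG(CONVERT(date, s.datedischarge)) OVER (PARTITION BY s.patidid ORDER BY s.epn)
               THEN 0 ELSE 1   -- 0 = continues the previous episode, 1 = starts a new one
           END AS is_new_episode
    FROM #source s
)
SELECT f.*,
       SUM(f.is_new_episode) OVER (PARTITION BY f.patidid
                                   ORDER BY f.epn
                                   ROWS UNBOUNDED PRECEDING) AS episode_continuous_care_key
FROM flagged f
ORDER BY f.patidid, f.epn;
Rows that continue an episode keep the same key as the row before them, so grouping by patidid and episode_continuous_care_key (taking MIN(preadmitdate) and MAX(datedischarge)) collapses each chain into a single continuous care episode.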

That Excel sheet image says little about your database design, so I invented my own version that more or less resembles your image. With a proper database design, the first step of the solution should not be required.
Unpivot the timestamp information so that the admission and discharge timestamps become one column. I used a common table expression, Log1, for this step.
Use the codes to filter out the starts of the continuous care periods. Those are the admissions, marked with Code.IsAdmission = 1 in my database design.
Also add the next period start as another column by using the lead() function.
These are the actions in Log2.
Add a row number as the continuous care key.
Using the next period start date, find the current continuous period's end date with an outer apply.
Replace empty period end dates with the current date using the coalesce() function.
Calculate the difference as the continuous care period duration with the datediff() function.
Sample data
create table Codes
(
Code int,
Description nvarchar(50),
IsAdmission bit
);
insert into Codes (Code, Description, IsAdmission) values
( 1, 'First admission', 1),
( 2, 'Re-admission', 1),
( 4, 'Campus transfer IN', 0),
(10, 'Trial visit', 0),
(22, 'Campus transfer OUT', 0);
create table PatientLogs
(
PatientId int,
AdmitDateTime smalldatetime,
AdmitCode int,
DischargeDateTime smalldatetime,
DischargeCode int
);
insert into PatientLogs (PatientId, AdmitDateTime, AdmitCode, DischargeDateTime, DischargeCode) values
(1849, '2020-04-23 07:29', 1, '2020-07-31 09:03', 22),
(1849, '2020-07-31 11:00', 4, '2020-07-31 12:09', 22),
(1849, '2020-07-31 13:10', 4, '2020-08-24 10:36', 10),
(1849, '2020-08-26 12:25', 2, null, null);
Solution
with Log1 as
(
select updt.PatientId,
case updt.DateTimeType
when 'AdmitDateTime' then updt.AdmitCode
when 'DischargeDateTime' then updt.DischargeCode
end as Code,
updt.LogDateTime,
updt.DateTimeType
from PatientLogs pl
unpivot (LogDateTime for DateTimeType in (AdmitDateTime, DischargeDateTime)) updt
),
Log2 as (
select l.PatientId,
l.Code,
l.LogDateTime,
lead(l.LogDateTime) over(partition by l.PatientId order by l.LogDateTime) as LogDateTimeNext
from Log1 l
join Codes c
on c.Code = l.Code
where c.IsAdmission = 1
)
select la.PatientId,
row_number() over(partition by la.PatientId order by la.LogDateTime) as ContCareKey,
la.LogDateTime as AdmitDateTime,
coalesce(ld.LogDateTime, convert(smalldatetime, getdate())) as DischargeDateTime,
datediff(day, la.LogDateTime, coalesce(ld.LogDateTime, convert(smalldatetime, getdate()))) as ContStay
from Log2 la -- log admission
outer apply ( select top 1 l1.LogDateTime
from Log1 l1
where l1.PatientId = la.PatientId
and l1.LogDateTime < la.LogDateTimeNext
order by l1.LogDateTime desc ) ld -- log discharge
order by la.PatientId,
la.LogDateTime;
Result
PatientId ContCareKey AdmitDateTime DischargeDateTime ContStay
--------- ----------- ---------------- ----------------- --------
1849 1 2020-04-23 07:29 2020-08-24 10:36 123
1849 2 2020-08-26 12:25 2021-02-03 12:49 161
Fiddle to see things in action with intermediate results.

Here is a T-SQL solution that includes primary and foreign key relationships.
To make it a bit more realistic, I added a simple Patient table.
I put all your "codes" into a single table, which should make them easier to manage.
I do not understand the purpose of your "continuous care" concept, so I just added an isFirst bit column to the Admission table.
You might also consider adding something about the medical condition for which the patient is being treated. A hypothetical usage example follows the schema.
CREATE SCHEMA Codes
GO
CREATE TABLE dbo.Code
(
codeNr int NOT NULL,
description nvarchar(50),
CONSTRAINT Code_PK PRIMARY KEY(codeNr)
)
GO
CREATE TABLE dbo.Patient
(
patientNr int NOT NULL,
birthDate date NOT NULL,
firstName nvarchar(max) NOT NULL,
lastName nvarchar(max) NOT NULL,
CONSTRAINT Patient_PK PRIMARY KEY(patientNr)
)
GO
CREATE TABLE dbo.Admission
(
admitDateTime datetime2(0) NOT NULL,
patientNr int NOT NULL,
admitCode int,
isFirst bit,
CONSTRAINT Admission_PK PRIMARY KEY(patientNr, admitDateTime)
)
GO
CREATE TABLE dbo.Discharge
(
dischargeDateTime datetime2(0) NOT NULL,
patientNr int NOT NULL,
dischargeCode int NOT NULL,
CONSTRAINT Discharge_PK PRIMARY KEY(patientNr, dischargeDateTime)
)
GO
ALTER TABLE dbo.Admission ADD CONSTRAINT Admission_FK1 FOREIGN KEY (patientNr) REFERENCES dbo.Patient (patientNr) ON DELETE NO ACTION ON UPDATE NO ACTION
GO
ALTER TABLE dbo.Admission ADD CONSTRAINT Admission_FK2 FOREIGN KEY (admitCode) REFERENCES dbo.Code (codeNr) ON DELETE NO ACTION ON UPDATE NO ACTION
GO
ALTER TABLE dbo.Discharge ADD CONSTRAINT Discharge_FK1 FOREIGN KEY (patientNr) REFERENCES dbo.Patient (patientNr) ON DELETE NO ACTION ON UPDATE NO ACTION
GO
ALTER TABLE dbo.Discharge ADD CONSTRAINT Discharge_FK2 FOREIGN KEY (dischargeCode) REFERENCES dbo.Code (codeNr) ON DELETE NO ACTION ON UPDATE NO ACTION
GO
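A hypothetical usage example (the patient details are invented; the code numbers and timestamps follow the sample data earlier in this question):
INSERT INTO dbo.Code (codeNr, description) VALUES
(1, 'First admission'),
(22, 'Campus transfer OUT')
GO
INSERT INTO dbo.Patient (patientNr, birthDate, firstName, lastName) VALUES
(1849, '1980-01-01', 'Jane', 'Doe')
GO
INSERT INTO dbo.Admission (patientNr, admitDateTime, admitCode, isFirst) VALUES
(1849, '2020-04-23 07:29', 1, 1)
GO
INSERT INTO dbo.Discharge (patientNr, dischargeDateTime, dischargeCode) VALUES
(1849, '2020-07-31 09:03', 22)
GO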

Related

Incrementing revision numbers in table's composite key

I'm running SQL Server 2014 locally for a database that will be deployed to an Azure SQL V12 database.
I have a table that stores values of extensible properties for a business-entity object; in this case, the three tables look like this:
CREATE TABLE Widgets (
WidgetId bigint IDENTITY(1,1),
...
)
CREATE TABLE WidgetProperties (
PropertyId int IDENTITY(1,1),
Name nvarchar(50),
Type int -- 0 = int, 1 = string, 2 = date, etc
)
CREATE TABLE WidgetPropertyValues (
WidgetId bigint,
PropertyId int,
Revision int,
DateTime datetimeoffset(7),
Value varbinary(255),
CONSTRAINT [PK_WidgetPropertyValues] PRIMARY KEY CLUSTERED (
[WidgetId] ASC,
[PropertyId] ASC,
[Revision] ASC
)
)
ALTER TABLE dbo.WidgetPropertyValues WITH CHECK ADD CONSTRAINT FK_WidgetPropertyValues_WidgetProperties FOREIGN KEY( PropertyId )
REFERENCES dbo.WidgetProperties ( PropertyId )
ALTER TABLE dbo.WidgetPropertyValues WITH CHECK ADD CONSTRAINT FK_WidgetPropertyValues_Widgets FOREIGN KEY( WidgetId )
REFERENCES dbo.Widgets ( WidgetId )
So you see how (WidgetId, PropertyId, Revision) is a composite key, and the table stores the entire history of values (the current values are obtained by getting the rows with the biggest Revision number for each WidgetId + PropertyId).
I want to know how I can set up the Revision column to increment by 1 for each WidgetId + PropertyId. I want data like this:
WidgetId, PropertyId, Revision, DateTime, Value
------------------------------------------------
1 1 1 123
1 1 2 456
1 1 3 789
1 2 1 012
IDENTITY wouldn't work because it's global to the table, and the same applies to SEQUENCE objects.
Update: I can think of a possible solution using an INSTEAD OF INSERT trigger:
CREATE TRIGGER WidgetPropertyValueInsertTrigger ON WidgetPropertyValues
INSTEAD OF INSERT
AS
BEGIN
DECLARE @maxRevision int
SELECT @maxRevision = ISNULL(MAX(wpv.Revision), 0)
FROM WidgetPropertyValues wpv
JOIN INSERTED i ON i.WidgetId = wpv.WidgetId AND i.PropertyId = wpv.PropertyId
INSERT INTO WidgetPropertyValues (WidgetId, PropertyId, Revision, DateTime, Value)
SELECT i.WidgetId, i.PropertyId, @maxRevision + 1, i.DateTime, i.Value
FROM INSERTED i
END
(For the uninitiated: an INSTEAD OF INSERT trigger runs in place of the INSERT operation on the table, as opposed to a normal AFTER INSERT trigger, which runs after the INSERT has taken effect.)
I think this would be concurrency-safe because all INSERT operations have an implicit transaction, and any associated triggers are executed in the same transaction context, which should mean it's safe. Unless anyone can claim otherwise?
Your code has a race condition - a concurrent transaction might select and insert the same Revision between your SELECT and your INSERT. That could cause occasional (primary) key violations in a concurrent environment, forcing you to retry the entire transaction.
Instead of retrying the whole transaction, a better strategy is to retry only the INSERT. Simply put your code in a loop, and if a key violation (and only a key violation) happens, increment the Revision and try again.
Something like this (written off the top of my head):
DECLARE @maxRevision int;

SELECT @maxRevision = ISNULL(MAX(wpv.Revision), 0)
FROM WidgetPropertyValues wpv
JOIN INSERTED i
  ON i.WidgetId = wpv.WidgetId
 AND i.PropertyId = wpv.PropertyId;

WHILE 0 = 0 BEGIN
    SET @maxRevision = @maxRevision + 1;
    BEGIN TRY
        INSERT INTO WidgetPropertyValues (WidgetId, PropertyId, Revision, DateTime, Value)
        SELECT i.WidgetId, i.PropertyId, @maxRevision, i.DateTime, i.Value
        FROM INSERTED i;
        BREAK;
    END TRY
    BEGIN CATCH
        -- If the error was anything other than a key violation,
        -- pass it back to the caller.
        IF ERROR_NUMBER() <> 2627
            THROW;
        -- Otherwise this was a key violation, and we let the loop
        -- enter the next iteration (to retry with the incremented value).
    END CATCH
END

How to make SQL Server table primary key auto increment with some characters

I have a table like this :
create table ReceptionR1
(
numOrdre char(20) not null,
dateDepot datetime null,
...
)
I want to increment my id field (numOrdre) like '225/2015', '226/2015', ..., '1/2016', etc. What do I have to do for that?
2015 means the actual year.
Please let me know any possible way.
You really, and I mean Really, don't want to do such a thing, especially not as your primary key. You'd be better off with a simple int identity column for your primary key, plus a non-nullable create date column of type datetime2 with a default value of SYSDATETIME().
Create the increment number per year either as a calculated column or by using an instead of insert trigger (if you don't want it to be re-calculated each time). This can be done fairly easily with the row_number() function; a sketch follows.
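A minimal sketch of that calculated-at-query-time option, assuming the redesigned table has an int identity primary key and a createdAt datetime2 column as suggested above (both column names are illustrative):
-- numOrdre is derived whenever you read it, so it never has to be stored or maintained
CREATE VIEW dbo.ReceptionR1Numbered
AS
SELECT r.ReceptionR1ID,
       r.createdAt,
       CAST(ROW_NUMBER() OVER (PARTITION BY YEAR(r.createdAt)
                               ORDER BY r.ReceptionR1ID) AS varchar(10))
       + '/' + CAST(YEAR(r.createdAt) AS char(4)) AS numOrdre
FROM dbo.ReceptionR1 r;
The trade-off hinted at above is that the number is recomputed on every read, so an existing row's numOrdre can change if earlier rows from the same year are deleted; the trigger variant avoids that.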
As everyone else has said - don't use this as your primary key! But you could do the following, if you're on SQL Server 2012 or newer:
-- step 1 - create a sequence
CREATE SEQUENCE dbo.SeqOrderNo AS INT
START WITH 1001 -- start with whatever value you need
INCREMENT BY 1
NO CYCLE
NO CACHE;
-- create your table - use INT IDENTITY as your primary key
CREATE TABLE dbo.ReceptionR1
(
ID INT IDENTITY
CONSTRAINT PK_ReceptionR1 PRIMARY KEY CLUSTERED,
dateDepot DATE NOT NULL,
...
-- add a colum called "SeqNumber" that gets filled from the sequence
SeqNumber INT,
-- you can add a *computed* column here
OrderNo AS CAST(YEAR(dateDepot) AS VARCHAR(4)) + '/' + CAST(SeqNumber AS VARCHAR(4))
)
So now, when you insert a row, it has a proper and well defined primary key (ID), and when you fill the SeqNumber with
INSERT INTO dbo.ReceptionR1 (dateDepot, SeqNumber)
VALUES (SYSDATETIME(), NEXT VALUE FOR dbo.SeqOrderNo)
then the SeqNumber column gets the next value for the sequence, and the OrderNo computed column gets filled with 2015/1001, 2015/1002 and so forth.
Now when 2016 comes around, you just reset the sequence back to a starting value:
ALTER SEQUENCE dbo.SeqOrderNo RESTART WITH 1000;
and you're done - the rest of your solution works as before.
If you want to make sure you never accidentally insert a duplicate value, you can even put a unique index on your OrderNo column in your table.
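For completeness, a sketch of that safeguard against the dbo.ReceptionR1 definition above; the computed OrderNo expression is deterministic, so it can be indexed without persisting the column:
-- reject any accidental duplicate year/sequence combination
CREATE UNIQUE NONCLUSTERED INDEX UQ_ReceptionR1_OrderNo
    ON dbo.ReceptionR1 (OrderNo);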
Once more, you cannot use the combo field as your primary key. This solution sort of works on earlier versions of SQL Server and calculates the new annual YearlySeq counter automatically - but you had better have an index on dateDepot, and you might still have issues if there are many, many (hundreds of thousands of) rows per year.
In short: fight the requirement.
Given
create table dbo.ReceptionR1
(
ReceptionR1ID INT IDENTITY PRIMARY KEY,
YearlySeq INT ,
dateDepot datetime DEFAULT (GETDATE()) ,
somethingElse varchar(99) null,
numOrdre as LTRIM(STR(YearlySeq)) + '/' + CONVERT(CHAR(4),dateDepot,111)
)
GO
CREATE TRIGGER R1Insert on dbo.ReceptionR1 for INSERT
as
UPDATE tt
SET YearlySeq = ISNULL(ii.ReceptionR1ID - (SELECT MIN(xr.ReceptionR1ID)
                                           FROM dbo.ReceptionR1 xr
                                           WHERE DATEPART(year, xr.dateDepot) = DATEPART(year, ii.dateDepot)
                                             AND xr.ReceptionR1ID <> ii.ReceptionR1ID), 0) + 1
FROM dbo.ReceptionR1 tt
JOIN inserted ii ON ii.ReceptionR1ID = tt.ReceptionR1ID
GO
insert into ReceptionR1 (somethingElse) values ('dumb')
insert into ReceptionR1 (somethingElse) values ('requirements')
insert into ReceptionR1 (somethingElse) values ('lead')
insert into ReceptionR1 (somethingElse) values ('to')
insert into ReceptionR1 (somethingElse) values ('big')
insert into ReceptionR1 (somethingElse) values ('problems')
insert into ReceptionR1 (somethingElse) values ('later')
select * from ReceptionR1

SQL - How to INSERT a foreign key as a value for a column

I know this is rather basic, and I've searched for answers for quite some time, but I'm stuck.
I don't know how to make my code readable on here, but here it is.
Here's the query for making the table in question:
CREATE TABLE customer
( customer_id INT NOT NULL CONSTRAINT customer_pk PRIMARY KEY IDENTITY,
first_name VARCHAR(20) NOT NULL,
surname VARCHAR(20) NOT NULL,
dob DATETIME NOT NULL,
home_address VARCHAR(50) NOT NULL,
contact_number VARCHAR(10) NOT NULL,
referrer_id INT NULL FOREIGN KEY REFERENCES customer(customer_id)
);
And here's the problem code:
--fill customer table
INSERT INTO customer
VALUES ( 'Harold', 'Kumar', '2010-07-07 14:03:54', '3 Blue Ln, Perth', 0812391245, NULL )
INSERT INTO customer
VALUES ( 'Bingo', 'Washisnameoh', '2010-09-21 12:30:07', '3 Red St, Perth', 0858239471, NULL )
INSERT INTO customer
VALUES ( 'John', 'Green', '2010-11-07 14:13:34', '4 Blue St, Perth', 0423904823, NULL )
INSERT INTO customer
VALUES ( 'Amir', 'Blumenfeld', '2010-11-01 11:03:04', '166 Yellow Rd, Perth', 0432058323, NULL)
INSERT INTO customer
VALUES ( 'Hank', 'Green', '2010-07-07 16:04:24', '444 Orange Crs, Perth', 0898412429, 8)
(Specifically the line with the 8 value at the end.)
When executing the second query it responds with this:
Msg 547, Level 16, State 0, Line 1
The INSERT statement conflicted
with the FOREIGN KEY SAME TABLE constraint
"FK_customer_referr__5772F790". The conflict occurred in database
"master", table "dbo.customer", column 'customer_id'. The statement
has been terminated.
Appreciate your help with this.
1)
You have a primary key on customer_id, and your insert statements do not supply a value for customer_id.
2)
You have a self-referencing foreign key: referrer_id refers back to customer_id.
When you insert a record whose referrer_id is not null, in your case 8, make sure you have already inserted a record with customer_id 8.
How do you know that the referrer_id is supposed to be 8??
What you need to do is capture the customer_id value that was just inserted, and then use that in your second query:
DECLARE @ReferToID INT
INSERT INTO dbo.Customer(first_name, surname, dob, home_address, contact_number, referrer_id)
VALUES ('Harold', 'Kumar', '2010-07-07 14:03:54', '3 Blue Ln, Perth', 0812391245, NULL)
SELECT @ReferToID = SCOPE_IDENTITY() ; -- catch the newly given IDENTITY ID
INSERT INTO dbo.Customer(first_name, surname, dob, home_address, contact_number, referrer_id)
VALUES ('Hank', 'Green', '2010-07-07 16:04:24', '444 Orange Crs, Perth', 0898412429, @ReferToID)
I don't know which row you want to refer to (you didn't specify) - but I hope you understand the mechanism:
insert the new row into your table
get the newly inserted ID by using SCOPE_IDENTITY
insert the next row which refers to that first row and use that value returned by SCOPE_IDENTITY
Update: if you really want a given row to reference itself (a strange concept...), then you'd need to do it in a few steps:
insert the new row into your table
get the newly inserted ID by using SCOPE_IDENTITY
update that row to set the referrer_id
Something like this:
DECLARE @NewCustomerID INT
INSERT INTO dbo.Customer(first_name, surname, dob, home_address, contact_number)
VALUES ('Hank', 'Green', '2010-07-07 16:04:24', '444 Orange Crs, Perth', 0898412429)
SELECT @NewCustomerID = SCOPE_IDENTITY() ; -- catch the newly given IDENTITY ID
UPDATE dbo.Customer
SET referrer_id = @NewCustomerID
WHERE customer_id = @NewCustomerID
The only problem you have here is that the identity needs a seed value, which can be specified like IDENTITY(1,1), where the first 1 is the starting point and the second 1 is the increment. Then re-run your insert statements.
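For reference, the column definition that suggestion seems to have in mind would look something like this (the rest of the table is unchanged from the question):
CREATE TABLE customer
( customer_id INT IDENTITY(1,1) NOT NULL CONSTRAINT customer_pk PRIMARY KEY,
first_name VARCHAR(20) NOT NULL,
surname VARCHAR(20) NOT NULL,
dob DATETIME NOT NULL,
home_address VARCHAR(50) NOT NULL,
contact_number VARCHAR(10) NOT NULL,
referrer_id INT NULL FOREIGN KEY REFERENCES customer(customer_id)
);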

Return Data and Update Row without Multiple Lookups?

I have a stored procedure that looks up an article based on the article's title. But I also need to increment a column in the same table that counts the number of times the article is viewed.
Trying to be as efficient as possible, I see two possible ways to approach this:
Perform one SELECT to obtain the PK on the target row. Then use that PK to increment the number of views and, finally, another SELECT using the PK to return the article data.
Perform one SELECT to return the article data to my application, and then use the returned PK to make another round trip to the database to increment the number of views.
I know #1 would be pretty fast, but it's three lookups. And #2 requires two round trips to the database. Is there no way to optimize this task?
EDIT: Based on feedback, I came up with the following. Thanks for any comments or constructive criticism.
DECLARE @Slug VARCHAR(250) -- Stored procedure argument
-- declare @UpdatedArticle table variable
DECLARE @UpdatedArticle TABLE
(
ArtID INT,
ArtUserID UNIQUEIDENTIFIER,
ArtSubcategoryID INT,
ArtTitle VARCHAR(250),
ArtHtml VARCHAR(MAX),
ArtDescription VARCHAR(350),
ArtKeywords VARCHAR(250),
ArtLicenseID VARCHAR(10),
ArtViews BIGINT,
ArtCreated DATETIME2(7),
ArtUpdated DATETIME2(7)
);
UPDATE Article
SET ArtViews = ArtViews + 1
OUTPUT
INSERTED.ArtID,
INSERTED.ArtUserID,
INSERTED.ArtSubcategoryID,
INSERTED.ArtTitle,
INSERTED.ArtHtml,
INSERTED.ArtDescription,
INSERTED.ArtKeywords,
INSERTED.ArtLicenseID,
INSERTED.ArtViews,
INSERTED.ArtCreated,
INSERTED.ArtUpdated
INTO @UpdatedArticle
WHERE ArtSlugHash = CHECKSUM(#Slug) AND ArtSlug = #Slug AND ArtApproved = 1
SELECT a.ArtID, a.ArtUserID, a.ArtTitle, a.ArtHtml, a.ArtDescription, a.ArtKeywords, a.ArtLicenseID,
l.licTitle, a.ArtViews, a.ArtCreated, a.ArtUpdated, s.SubID, s.SubTitle, c.CatID, c.CatTitle,
sec.SecID, sec.SecTitle, u.UsrDisplayName AS UserName
FROM @UpdatedArticle a
INNER JOIN Subcategory s ON a.ArtSubcategoryID = s.SubID
INNER JOIN Category c ON s.SubCatID = c.CatID
INNER JOIN [Section] sec ON c.CatSectionID = sec.SecID
INNER JOIN [User] u ON a.ArtUserID = u.UsrID
INNER JOIN License l ON a.ArtLicenseID = l.LicID
Here is a way using the OUTPUT clause (SQL Server 2005 onwards), in a single UPDATE statement:
IF OBJECT_ID ('Books', 'U') IS NOT NULL
DROP TABLE dbo.Books;
CREATE TABLE dbo.Books
(
BookID int NOT NULL PRIMARY KEY,
BookTitle nvarchar(50) NOT NULL,
ModifiedDate datetime NOT NULL,
NumViews int not null CONSTRAINT DF_Numviews DEFAULT (0)
);
INSERT INTO dbo.Books
(BookID, BookTitle, ModifiedDate)
VALUES
(106, 'abc', GETDATE()),
(107, 'Great Expectations', GETDATE());
-- declare @UpdateOutput1 table variable
DECLARE @UpdateOutput1 table
(
BookID int,
BookTitle nvarchar(50),
ModifiedDate datetime,
NumViews int
);
-- >>>> here is the update of NumViews and the fetch
-- update NumViews in the Books table, and retrieve the row
UPDATE Books
SET
NumViews = NumViews + 1
OUTPUT
INSERTED.BookID,
INSERTED.BookTitle,
INSERTED.ModifiedDate,
INSERTED.NumViews
INTO @UpdateOutput1
WHERE BookID = 106
-- view updated row in Books table
SELECT * FROM Books;
-- view output row in @UpdateOutput1 variable
SELECT * FROM @UpdateOutput1;

Is it possible to add a db constraint for this rule?

I wish to make sure that my data has the following check (constraint?) in place:
This table can only have one BorderColour per hub/category (e.g. #FFAABB).
But it can have multiple nulls (all the other rows have null for this field).
Table Schema
ArticleId INT PRIMARY KEY NOT NULL IDENTITY
HubId TINYINT NOT NULL
CategoryId INT NOT NULL
Title NVARCHAR(100) NOT NULL
Content NVARCHAR(MAX) NOT NULL
BorderColour VARCHAR(7) -- Can be nullable.
I'm guessing I would have to make a check constraint? But I'm not sure how, etc.
Sample data:
1, 1, 1, 'test', 'blah...', '#FFAACC'
1, 1, 1, 'test2', 'sfsd', NULL
1, 1, 2, 'Test3', 'sdfsd dsf s', NULL
1, 1, 2, 'Test4', 'sfsdsss', '#AABBCC'
Now, if I add the following row, I should get some SQL error:
INSERT INTO tblArticle VALUES (1, 2, 'aaa', 'bbb', '#ABABAB')
any ideas?
CHECK constraints are ordinarily applied to a single row; however, you can cheat by using a UDF:
CREATE FUNCTION dbo.CheckSingleBorderColorPerHubCategory
(
#HubID tinyint,
#CategoryID int
)
RETURNS BIT
AS BEGIN
RETURN CASE
WHEN EXISTS
(
SELECT HubID, CategoryID, COUNT(*) AS BorderColorCount
FROM Articles
WHERE HubID = #HubID
AND CategoryID = #CategoryID
AND BorderColor IS NOT NULL
GROUP BY HubID, CategoryID
HAVING COUNT(*) > 1
) THEN 0
ELSE 1
END
END
Then create the constraint and reference the UDF:
ALTER TABLE Articles
ADD CONSTRAINT CK_Articles_SingleBorderColorPerHubCategory
CHECK (dbo.CheckSingleBorderColorPerHubCategory(HubID, CategoryID) = 1)
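As a quick smoke test, assuming the Articles table also has Title and Content columns like the question's tblArticle (those two column names are illustrative):
-- first non-null colour for hub 1 / category 2: passes the check
INSERT INTO Articles (HubID, CategoryID, Title, Content, BorderColor)
VALUES (1, 2, 'Test4', 'sfsdsss', '#AABBCC');
-- second non-null colour for the same hub/category: the check constraint rejects this insert
INSERT INTO Articles (HubID, CategoryID, Title, Content, BorderColor)
VALUES (1, 2, 'aaa', 'bbb', '#ABABAB');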
Another option is available if you are running SQL Server 2008. This version introduced a feature called filtered indexes.
Using this feature, you can create a unique index that includes all rows except those where BorderColour is null.
CREATE TABLE [dbo].[UniqueExceptNulls](
[HubId] [tinyint] NOT NULL,
[CategoryId] [int] NOT NULL,
[BorderColour] [varchar](7) NULL,
)
GO
CREATE UNIQUE NONCLUSTERED INDEX UI_UniqueExceptNulls
ON [UniqueExceptNulls] (HubID,CategoryID)
WHERE BorderColour IS NOT NULL
This approach is cleaner than the approach in my other answer because it doesn't require creating extra computed columns. It also doesn't require you to have a unique column in the table, although you should have that anyway.
Finally, it will also be much faster than the UDF/Check Constraint solutions.
You can also do it with a trigger, something like this (this is actually overkill - you can make it cleaner by assuming the database is already in a valid state, i.e. UNION instead of UNION ALL, etc.):
IF EXISTS (
SELECT COUNT(BorderColour)
FROM (
SELECT HubId, CategoryId, BorderColour
FROM INSERTED
UNION ALL
SELECT HubId, CategoryId, BorderColour
FROM tblArticle
WHERE EXISTS (
SELECT *
FROM INSERTED
WHERE tblArticle.HubId = INSERTED.HubId
AND tblArticle.CategoryId = INSERTED.CategoryId
)
) AS X
GROUP BY HubId, CategoryId
HAVING COUNT(BorderColour) > 1
)
RAISERROR('Only one BorderColour is allowed per hub/category', 16, 1)
If you have a unique column in your table, then you can accomplish this by creating a unique constraint on a computed column.
The following sample creates a table that behaves as you described in your requirements and should perform better than a UDF-based check constraint. You might also be able to improve performance further by making the computed column persisted.
CREATE TABLE [dbo].[UQTest](
[Id] INT IDENTITY(1,1) NOT NULL,
[HubId] TINYINT NOT NULL,
[CategoryId] INT NOT NULL,
[BorderColour] varchar(7) NULL,
[BorderColourUNQ] AS (CASE WHEN [BorderColour] IS NULL
THEN cast([ID] as varchar(50))
ELSE cast([HuBID] as varchar(3)) + '_' +
cast([CategoryID] as varchar(20)) END
),
CONSTRAINT [UQTest_Unique]
UNIQUE ([BorderColourUNQ])
)
The one possibly undesirable facet of the above implementation is that it allows a category/hub to have both a Null AND a color defined. If this is a problem, let me know and I'll tweak my answer to address that.
PS: Sorry about my previous (incorrect) answer. I didn't read the question closely enough.
